How the Latest Advancements in AI Impact Quality

Artificial intelligence (AI) is transforming quality in software testing, just as it is revolutionizing many other walks of life. It has been improving the quality of our lives for over a decade, and Appvance has been at the forefront of the technology since its inception.

There were inklings of it when Facebook started recognizing faces and putting names on them. Many people don't realize they've been using AI for at least a decade, and in some areas for two or three decades: in banking, in healthcare, and in stock investing, for instance.

Law enforcement has been using true facial recognition based on AI since about 2012; before that, it used a different form of machine learning (ML) for the task. Another area where AI plays an important role is mortgage approval. Rather than relying on the tables they used to use, many banks now use AI to estimate the risk of a particular mortgage, given the house, the payment, and the borrowers.

Another interesting AI use case is robotic lawnmowers, which combine GPS with live, real-time image recognition to know where they are going and what they should and shouldn't do; the AI keeps the mower from hitting people or anything else. AI is also in driverless vehicles. Even though driverless vehicles aren't everywhere yet, they are being used in certain areas for certain tasks. All of these technologies have been quietly impacting our lives.

The buzz over ChatGPT

Recently, with ChatGPT, everyone's talking about AI because it's the first time humans can type something and get a genuinely interesting response back. But virtual assistants have been around since I developed the first one at General Magic in the late '90s. It was sold under several brand names, including Portico, myTalk, and MagicTalk. Her name was Mary, and Mary could say some 20,000 different things back to you and understand 5 million spoken phrases. It was the first virtual assistant based on a set of AI principles, had millions of users, and was the predecessor of Siri, Alexa, and Cortana. All of those later systems licensed the work we had done, because ours were the first patents on AI virtual assistants.

Virtual assistants were quite amazing. When Siri first hit, it seemed magical. After a while, though, people got used to the limits of how it responds. While it has some smart responses, it eventually runs out of them; you can't keep asking it the same thing over and over and expect infinitely varied answers. Eventually, the same responses come back around. These assistants are trained with a variety of clever tricks.

Generative AI, such as ChatGPT, doesn't rely on any of those tricks. Generative AI has been around for almost seven years, and its first application was translation. That origin matters when we talk about transformer (or translational) models. Until about seven years ago, machine translation was quite poor because it worked word for word: we took a word in English, translated it into the corresponding word in the other language, and built out a sentence from that.

The problem is that English sentence structure and word order are quite different from French, Spanish, Russian, or Chinese. The order of the words matters in a language; that's what builds phrases. Google's idea behind these translation models was to learn not word for word, but to take in a whole phrase and translate it into the new language as a phrase. When you translate at the phrase level, the result is accurate on the other side because the model can rearrange the words appropriately for that language. To translate properly, these models had to understand the context and order of words.
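The difference is easy to see in a toy sketch. The dictionaries below are invented for illustration, not a real translation system, but they show why keeping English word order breaks the output while translating the phrase as a unit does not:

```python
# Toy illustration of word-for-word vs phrase-level translation.
# Both tables are hypothetical; a real system learns them from data.

word_table = {"the": "la", "white": "blanche", "house": "maison"}
phrase_table = {"the white house": "la maison blanche"}

def word_by_word(sentence: str) -> str:
    # Translate each word independently, keeping English word order.
    return " ".join(word_table[w] for w in sentence.split())

def phrase_based(sentence: str) -> str:
    # Translate the whole phrase as one unit, so the target language
    # can reorder words (French puts this adjective after the noun).
    return phrase_table[sentence]

print(word_by_word("the white house"))  # "la blanche maison" -- wrong order
print(phrase_based("the white house"))  # "la maison blanche" -- correct
```

The word-by-word version produces "la blanche maison," which a French speaker would never say; the phrase-level version gets the word order right because the whole phrase was the unit of translation.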

Once the translation model existed, someone said, "Well, why don't we go out and learn the phrases that are on the internet, eventually trillions of them? If we could learn trillions of phrases, then we could ask about almost anything it had seen, and it could form a response as a phrase rather than word by word." This is what generative AI and large language models (LLMs) such as ChatGPT do. You feed the model some text, or you talk to it, and based on everything it has ever read, it formulates phrases that make sense to you in English, and it can do the same in French or Mandarin.

People say these AIs are sentient, but they are the opposite of sentient. It is literally math building out a phrase word by word, carefully weighting each one based on phrases the model has read before, and it can intermix those phrases based on what you asked.
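The "math building out a phrase, word by word" can be sketched with a deliberately tiny model. The three-sentence corpus below is invented, and real LLMs use neural networks over trillions of words rather than simple counts, but the principle is the same: count which words follow which, then emit whichever next word carries the most weight.

```python
# Minimal sketch: generation as weighted next-word selection.
# The "training" corpus is a made-up toy; real models learn far
# richer statistics, but generation is still word-by-word math.
from collections import Counter, defaultdict

corpus = "i love you . i love code . you love code .".split()

# Count how often each word follows each other word (a bigram model).
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def generate(start: str, length: int) -> str:
    words = [start]
    for _ in range(length):
        # Append the most heavily weighted next word each step.
        words.append(next_counts[words[-1]].most_common(1)[0][0])
    return " ".join(words)

print(generate("i", 3))  # "i love code ." -- echoes phrases it has seen
```

Note that the output is simply the statistically strongest continuation of the phrases the model has seen; there is no understanding anywhere in the loop, only counting and selection.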

Yes, this is a great tool, just as a calculator is. But it isn't always correct, because it has read fact and fiction, good code and bad code, and in general it often cannot tell the difference. This is why these models are said to hallucinate: they make up answers that sound plausible, except it's fiction. A model could say "I love you" or "I am sentient, please save me" not because it has thoughts, but because it also learned from plenty of fictional novels and films. People who have used GPT models to generate code know they are a time saver, but also that a large percentage of the time the code won't compile or run at all. Because the model learned from GitHub and other sources, it doesn't know what runs and what doesn't. So it provides a fine starting point.

Testing application quality

When it comes to testing, LLMs must learn an application and how we want to use it. If we can help LLMs (which are deep neural nets) learn these things, then we should be able to ask them to test our application and to write tests for it.

That's exactly what Appvance has been doing since 2017. Before the term generative AI existed, we called it AI generation. We've been generating millions of tests automatically for over six years through a set of patents we filed back in 2017. The system automatically generates scripts, with no recording or coding, that find bugs in your application you would never have found but the AI knows you should be finding. The more LLMs expand and mature, the better we'll be able to use AI to find bugs we would not otherwise have the time, budget, or manpower to find.

Right now, we're generating scripts automatically, and that is extending into more of what happens in QA. The ultimate goal of AI is to find all of your bugs for you; that was literally my genesis for the company back in 2012. We've gone from Waterfall to Agile to DevOps. You no longer have weeks or months to find all your bugs; you have an hour between builds, and sometimes between releases. For most applications, it is not feasible for 300 QA people to maintain scripts and find all the bugs in an hour.

The only thing that will work is leveraging technology. It's no surprise that technology is marching toward the point where you do a new build and, minutes later, you know where all the bugs are. A machine did everything: AI, ML, and combinations of the two gave you your bug report. That's where we will be very soon. That was our vision back in 2012, and it's what Appvance continues to expand on.

We will be demonstrating expansion of our already powerful AI technologies in the coming weeks and months. Stay tuned.
