Are LAMs the future of Automated QA?

A LAM is a new kind of model that "allows for the direct modeling of the structure of various applications and user actions performed on them without a transitory representation, such as text". Could LAMs revolutionise automated software QA?

One of the big winners from this year's Consumer Electronics Show (CES) was the Rabbit R1, a handheld device that uses AI to automate tasks like playing music and booking travel.

But as I read more about how it works under the hood, I learned it doesn't use the LLMs we've all become familiar with from ChatGPT and GitHub Copilot, but instead relies on what Rabbit is calling Large Action Models, or LAMs for short.

Essentially, they train the model by performing a task like booking an Airbnb; the model can then replicate that task from the training data, using, I assume, some kind of headless browser.
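Rabbit hasn't published the internals, but you can imagine the recorded demonstration as a structured trace of UI actions. Here's a minimal sketch of what such a trace might look like; the `Action` type, field names, and the Airbnb steps are all hypothetical, purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """One recorded UI interaction (hypothetical schema)."""
    kind: str       # e.g. "navigate", "type", "click"
    target: str     # a URL or element selector
    value: str = "" # text entered, if any

# A made-up trace captured while a human books an Airbnb once.
booking_trace = [
    Action("navigate", "https://www.airbnb.com"),
    Action("type", "input#search", "Lisbon"),
    Action("click", "button#submit"),
]
```

In a real system each step would presumably be executed against a headless browser driver such as Playwright or Selenium.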

And when I saw this, my mind immediately went to "surely you could use this for QA": train the model once on your happy path, then give it all your test cases as prompts. Funnily enough, in this scenario one of the bigger flaws in LLMs, known as hallucination (basically making stuff up), could actually be a strength, because the model will start to use your software in unexpected ways.
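The "hallucination as a feature" idea above is really just exploratory fuzzing of a recorded happy path. A minimal sketch of that loop, assuming a hypothetical step format and driver callback (nothing here is Rabbit's actual API):

```python
import random

# A hypothetical happy-path trace, recorded once (steps as plain dicts).
happy_path = [
    {"kind": "navigate", "target": "/login"},
    {"kind": "type", "target": "#email", "value": "qa@example.com"},
    {"kind": "click", "target": "#submit"},
]

def mutate_trace(trace, seed=0, drop_rate=0.2):
    """Simulate 'hallucination': randomly drop and reorder steps so the
    app gets exercised in ways the happy path never covered."""
    rng = random.Random(seed)
    mutated = [dict(s) for s in trace if rng.random() > drop_rate]
    rng.shuffle(mutated)
    return mutated

def replay(trace, perform):
    """Replay a trace by handing each step to a driver callback —
    in a real setup, a Playwright/Selenium wrapper."""
    for step in trace:
        perform(step)
```

Usage: `replay(mutate_trace(happy_path, seed=1), driver.perform)` for as many seeds as you have patience for, flagging any run that crashes or lands on an error page.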

Of course, this could end up being more noise than signal, but it's interesting to see the continued application of LLM-style technology and to consider how it applies to software development.