
Mission

“You want to wake up in the morning, open up your computer, and find it has continued working on your exact work task, progressing it for you automatically, perfectly aligned to your workflow and exactly how you would like it done – that’s what being in a world of AI assistants is all about. It’s about empowering humans to achieve and produce significantly more than we have ever been able to in the past. And I can’t think of anything more exciting than defining and making this future a reality.” – Sam Holt (L2MAC's Creator).

Making AI Assistants Directed

Building on the incredible recent advancements in Large Language Models, L2MAC aims to create AI assistants that remain fully under user control across near-infinite task lengths and highly complex tasks, and that perform repeatable operations akin to complex business processes. We envisage our AI assistants empowering LLMs to become the most powerful compound AI system for achieving a specific complex real-world task, such as coding an entire application's codebase, which previously required a team of professional software engineers working for months; we aim to replicate the same level of quality in minutes, whether generating the code from scratch or working with an existing codebase to implement new functionality.

Creating the Most Reusable General-Purpose Automatic Computer LLM Framework

We believe a fully and rapidly reusable general-purpose LLM framework is the pivotal breakthrough needed to substantially increase the adoption of LLM-based assistants. The majority of previous LLM frameworks are specialized for one particular task, comparable to specialized computing machines in the early days of computing.

Continuing the computing analogy: with the invention of the general-purpose computer, the architecture, framework, and tooling stack could be developed and advanced independently of the particular program being run, and the machine could easily be re-programmed to execute any general-purpose task. We believe such a separation is crucial for the advancement of AI assistants, and we seek to build the most powerful LLM-based task computing framework.

Real-world Work Tasks Follow Business Processes

On the quest to tackle real-world business tasks, we believe an AI assistant framework should be composed of the highest-performing components, reusable across general-purpose tasks. In a world run on repeatable economic processes, the business processes that power companies internationally, we believe AI assistants should execute tasks in a fully controllable, repeatable manner, following a “prompt program”, if you will.

Prompt Programs

The LLM Automatic Computer introduces the first prompt-program-driven automatic LLM-based computer for solving general-purpose tasks. Progressing this paradigm is crucial: it allows complex tasks to be decomposed into prompt flows that condition the underlying LLM, a powerful general-purpose task solver that is nonetheless limited by its context length. This is further augmented with tools, leveraging the distinct strengths of explicitly built tools for computation or for compiling and running code, yielding a framework that blends the strengths of LLMs with those of tools. We envisage future prompt programs supporting complex control flow, such as for loops, conditional while loops, and if statements, among other patterns, opening up rich possibilities for the precise control of such AI assistants that will be needed when solving complex tasks.
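To make the idea concrete, the sketch below shows, purely as an illustration in plain Python, what a prompt program with explicit control flow might look like: a for loop over sub-tasks and a conditional retry loop gated by a tool. The names llm_call and run_tests are hypothetical placeholders supplied by the user, not part of the L2MAC API.

```python
# A minimal, hypothetical sketch of a "prompt program": explicit control flow
# (a for loop over sub-tasks and a conditional retry loop) wrapped around LLM
# calls and a verification tool. `llm_call` and `run_tests` are illustrative
# placeholders, not the L2MAC API.

from typing import Callable, List


def prompt_program(
    task: str,
    llm_call: Callable[[str], str],    # LLM completion function (user-supplied)
    run_tests: Callable[[str], bool],  # tool: compile/run the code, report success
    max_retries: int = 3,
) -> List[str]:
    """Decompose a task, then generate and verify each sub-task's output."""
    # Step 1: ask the LLM to break the task into sub-tasks (one per line).
    plan = llm_call(f"List the sub-tasks needed to complete: {task}")
    sub_tasks = [line.strip() for line in plan.splitlines() if line.strip()]

    outputs: List[str] = []
    # Step 2: a for loop over sub-tasks -- control flow lives outside the LLM.
    for sub_task in sub_tasks:
        result = llm_call(f"Write code for: {sub_task}")
        # Step 3: a conditional retry loop -- the tool's verdict gates progress.
        for _ in range(max_retries):
            if run_tests(result):
                break
            result = llm_call(f"The code failed its tests. Fix it:\n{result}")
        outputs.append(result)
    return outputs
```

The design choice this sketch highlights is the separation of concerns: ordinary program control flow provides repeatability and user direction, the LLM provides generation, and an external tool provides verification.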

Making History

We have already introduced the first LLM-based automatic computer framework and published it at the International Conference on Learning Representations (ICLR) 2024, a competitive peer-reviewed conference. We have open-sourced the framework and now invite contributors to advance the cutting edge with us and help build the world’s most powerful, yet user-directed and fully controllable, AI assistants.

Released under the MIT License.