I believe artificial general intelligence is, in a sense, already here. It will likely not manifest as a single god-like AI model, but rather as numerous specialised agents working together, some sequentially and others executing specialised tasks in parallel. Orchestrating how these smaller agents collaborate will be the critical challenge for developers in the next phase of AI development.
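As a rough sketch of that orchestration pattern (the agent names and the asyncio plumbing here are purely hypothetical; a real system would call out to specialised models), a coordinator can fan independent agents out in parallel and then feed their combined results to a sequential downstream step:

```python
import asyncio

# Hypothetical specialised agents, stubbed as string transforms.
async def summarise(text: str) -> str:
    return f"summary({text})"

async def extract_entities(text: str) -> str:
    return f"entities({text})"

async def draft_reply(summary: str, entities: str) -> str:
    return f"reply({summary}, {entities})"

async def orchestrate(document: str) -> str:
    # Parallel stage: independent specialised agents run concurrently.
    summary, entities = await asyncio.gather(
        summarise(document), extract_entities(document)
    )
    # Sequential stage: a downstream agent consumes both outputs.
    return await draft_reply(summary, entities)

print(asyncio.run(orchestrate("quarterly report")))
```

The coordinator, not any single model, decides which agents run, in what order, and how their outputs combine; that routing logic is where the orchestration challenge lives.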

I am observing a slowdown in capability improvements in large language model (LLM) development. Given the substantial capital invested in the industry, which will require returns in the coming years, companies are heavily focused on natural language applications, as this is where LLMs currently excel.

Given the current geopolitical situation, organisations are increasingly seeking private cloud solutions of their own. Apple’s decision to support running LLMs locally on the Mac mini and M5 MacBooks, alongside NVIDIA’s DGX Spark desktop box, exemplifies this trend towards keeping data within proprietary infrastructure.

To advance real AI adoption in productivity work, we must ensure functionality is readily accessible rather than locked behind significant paywalls. We are already too dependent on cloud-based AI models, which creates inequality between those who can pay for access and those who cannot.