General AI · 5 min
Thoughts on 2024 AI Predictions – Part 3: Rethinking the Large “Large Language Models”
I am writing a series of blog posts that takes several of the predictions made about AI for 2024, works to understand whether I think they will come to fruition and what they would look like if they did, and, in general, uses the science fiction writer part of my mind to see into the near future.
Rethinking the Large “Large Language Models”
I know I just wrote that niche LLMs will be a major force this year, especially in shaping how we use AI in our daily work and personal lives. Their propagation will both dilute and refine the generative AI space. Several 2024 AI predictions center on this idea, arguing that the larger LLMs will be negatively impacted as a result.
I expect a few of them might be. As smaller LLMs offer more of the features that set some of the existing larger LLMs apart, the open-source or cheaper models will overtake them. Basically, any of these larger LLMs that are not growing in the following areas will find the open-source community taking over: researching and deploying new methodologies, training on larger and larger datasets, improving their training techniques, fine-tuning their models, working on self-learning methods, improving transformer architecture (watch Google Gemini’s context window increase…), using more specialized hardware, and developing new techniques that haven’t been revealed yet (ChatGPT 4 is the best LLM currently, and there is little doubt it’s because they are doing something with their model they are keeping secret from everyone).
But there’s the rub… the availability of specialized LLMs and other niche uses of AI is dependent on, and in some ways entirely reliant on, the release of code to the community. I really wanted to say they were reliant on the ‘crumbs’ from the large companies in that last sentence, but the number of tools and libraries that Microsoft, Google, Meta, IBM, and Amazon have opened and released is substantial. So substantial that the community’s growth is largely dependent on their table droppings.
Not entirely dependent, though… the roller coaster has already gone over the edge on this one. Even if the large LLM companies turned off the faucet, we’d still see growth in the same areas where we’re seeing it right now. None of these areas will stop existing, and each is currently fueling the niche market to grow faster:
Academic Research (University specific research as well as research communities that span Universities)
Collaboration with no bounds
Crowdfunding
Potential Government funding
Utilizing existing public datasets for training (see the sketch after this list)
Containerization technologies (like Docker) to share models and train across many platforms
Peer Review Improvement
Not-for-profit innovation – Do not underestimate the amazing advances that people who are simply playing with things for no purpose can make. Those are the things that change the world.
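To ground the public-dataset point above, here is a minimal, hypothetical sketch (not from this post) of how a niche builder might fine-tune a small, openly licensed model on a free public corpus using the open-source tooling the large companies and their ecosystems have released. The model (distilgpt2) and dataset (WikiText-2) are illustrative assumptions, not recommendations:

```python
# Minimal sketch: fine-tune a small open model on a public dataset.
# Model and dataset choices are illustrative assumptions only.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "distilgpt2"  # assumption: any small, openly licensed causal LM works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 family has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Public dataset: a small slice of WikiText-2, freely available for training.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal LM objective

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="niche-llm",
        num_train_epochs=1,
        per_device_train_batch_size=8,
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```

Wrap a script like that in a container image and it becomes exactly the kind of shareable, train-anywhere artifact the containerization point above describes.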
So, while the Large LLM companies have all the advantages:
Massive datasets
Incredibly powerful compute resources
Existing expertise and growing experience
Secrets they are using that they are not sharing with each other or the rest of us…
I’m predicting they will not be negatively impacted by the niche AI community. They are feeding that community specifically to help with some innovations they aren’t able, or willing, to explore themselves. In 2024, expect those communities to thrive as they find the spaces that make the most sense to the people using them. These are potential focus groups for the large companies to determine the right direction, the innovation farms to uncover ideas the large companies never would have found on their own, and the training grounds to find new talent in the AI field to pull into their corporate worlds.
Yes, we’re going to have to rethink the Large LLMs in 2024. They are going to hit us with new models (models that will be added to AI aggregators like backplain) that nobody expected, and they will continue to improve. We’ve already seen it with ChatGPT 4.5 and Google Gemini’s latest edition. Thinking about them as monolithic arbiters of what’s next is not going to fly anymore. Nobody’s caught up to OpenAI yet, and when somebody does, you’ll see ChatGPT 5 deployed faster than expected and be amazed all over again. They will continue to compete and gain ground based on that competition. I’m interested in how they are growing the open-source community and how much market space they are willing to give up to it.
As much as I’d find it interesting (from a Sci-Fi perspective) to see the community they helped create beat them at their own game and outperform them… I don’t see any open-source model competing at the level the larger LLMs will be competing at.
At least not this year.
Again, let’s get an LLM or two from backplain and read their take on it (I leave in the overt attempts at praise that mark a lot of what AI provides. I do appreciate ChatGPT 4 criticizing my prediction, even though I acknowledged the possibility in the previous blog. Gemini’s conclusion has merit as well):
ChatGPT 4 with Web Search:
This article does a very adept job of addressing the impact of niche LLMs on Larger LLMs. While the author acknowledges that niche AIs may dilute and refine the generative AI space, they also make a compelling argument for the continued relevance and dominance of larger LLMs.
The author asserts that larger LLM companies have innate advantages, such as massive datasets, powerful compute resources, growing experience, and proprietary methodologies that they needn't share with others. This highlights an often-overlooked asset of larger corporations: their access to massive, diverse resources, and expertise, which AI training relies heavily upon. The lack of access to these resources could potentially limit the growth and evolution of open-source or smaller entrants in certain areas.
Another important observation is the 'give-and-take' relationship between open-source communities and large LLM companies. While smaller entities certainly benefit from the innovation and resources of larger companies, they also serve as 'innovation farms' for these same corporations. This viewpoint suggests a symbiotic relationship, where the large LLM corporations reap novel concepts and talents from the community, which they helped nurture, ensuring a fluid and continual exchange of ideas.
However, the prediction that smaller LLMs and open-source models won't compete at the level of larger LLMs is a potentially debatable point. There are examples in many technology sectors where open-source initiatives have disrupted the status quo and even led the market (e.g., Linux in the OS landscape). By adopting a highly collaborative, transparent, and community-driven approach, open-source projects sometimes achieve what larger corporations cannot, particularly in terms of customization and flexibility for end-users.
Perhaps more relevant to the conversation is the observation that while specialization (niche LLMs) and generalization (large LLMs) seem to be working in opposition, they can coexist – each is integral to the growth of the AI landscape. Niche applications cater to specific challenges, providing detailed and highly focused solutions, while larger LLMs offer a broader but potentially less detailed response to a wider array of issues. The balance between these two will likely shape the AI space's evolution.
Rethinking large LLMs is indeed necessary for understanding and adapting to the evolving AI landscape. While niche AIs and open-source models will certainly provide competition and impetus for innovation, the resources, expertise, and scalability of large LLMs make them less likely to be overrun in the near term.
Google Gemini Pro:
Analysis:
The article discusses the potential impact of niche AI models on larger language models (LLMs) in 2024. The author argues that while niche AI models may gain traction and address specific needs, they are unlikely to negatively impact the dominance of larger LLMs.
AI Expert Insight:
The article provides a balanced perspective on the interplay between niche and large AI models. While niche models offer specialized solutions, larger LLMs possess significant advantages in terms of data, compute, and expertise.
Key Points:
Niche AI models will continue to thrive in specific domains and applications.
Large LLMs will remain dominant due to their comprehensive capabilities and ongoing advancements.
Large LLM companies are fostering the growth of the open-source AI community, recognizing its potential for innovation and talent acquisition.
Competition among large LLMs will drive further improvements and deployment of new models.
Implications:
The article suggests that organizations should adopt a pragmatic approach to AI adoption. Both niche and large AI models have their strengths and limitations. Businesses should evaluate their specific needs and consider the trade-offs involved to optimize their AI strategy.
Conclusion:
Contrary to predictions of a decline in large LLMs, the article suggests that they will continue to dominate the AI landscape in 2024. Niche AI models will play a complementary role, addressing specific industry and application requirements. Collaboration between large LLM companies and the open-source community will foster innovation and advance the field of AI as a whole.