
LLM Wars (and how to make sure you win)


Since the dawn of the digital age, there have been battles, fought over decades, for the betterment of consumers. At times there were clear victors: VHS over Betamax, Blu-ray over HD-DVD (that one hurt). In other fights there was no clear victor, or the war still rages: cable versus streaming services, one streaming service versus another, physical copies of music versus digital copies, console versus PC, and so on. Generative AI models are taking the battlefield now.


Obviously, the digital wars of the past bear varying similarities to what we're seeing now. Today we argue over how well new LLMs handle tests compared to high school students, college students, and other models. Each large language model is described by the amount and quality of the data used to train it and by whether it can integrate real-time data. And new models compete with the "old" ones, scoring just as well while being trained on fewer data points with more efficient training methods…


The LLM wars in that sense are like the previous digital scenarios. This AI has a larger memory, so it's better at remembering my characters and what they would do as I write my novel. That AI is trained on millions of novels, so it will be better at developing my characters. The sound quality of this vinyl record is superior to that poorly encoded MP3…


It's the same song. There are obviously differences in the outputs, but both services will provide similar responses.
And I know, it's not always a like-for-like comparison; that's where the LLM wars differ from past digital battles. A model trained on billions of lines of code will be better for coding than a model trained on British literature. That puts us squarely in the console-versus-PC camp: both AI services are useful for different purposes, and each has moments where it is superior to the other.


Will there come a time when generative AI models become so fragmented that we are stuck with microservices and specialization to the extreme? Possibly, and in several instances, yes. That is not necessarily a bad thing, though. Specialization can buy efficiency: this AI has one task and does it extremely well. A specialist can also become one module among many, integrating with other modular AI services to create a custom solution that wouldn't have been possible without that segmentation. It might bring cost efficiency as well; with smaller AIs comes less resource usage.
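To make that modular idea concrete, here is a toy sketch of what composing specialized AI services could look like. The three stub functions (extract_entities, summarize, translate) are hypothetical stand-ins for separate single-purpose models, not real APIs; the point is only that small specialists can be chained into a custom pipeline no one generalist was built to provide.

```python
# Toy sketch: three hypothetical specialized "micro" AI services,
# each stubbed out here, composed into one custom pipeline.

def extract_entities(text: str) -> list[str]:
    """Specialist #1: pull out key terms (stubbed with a naive heuristic)."""
    return [word for word in text.split() if word.istitle()]

def summarize(text: str) -> str:
    """Specialist #2: condense the text (stubbed as 'first sentence')."""
    return text.split(".")[0] + "."

def translate(text: str, lang: str) -> str:
    """Specialist #3: translate the summary (stubbed with a tag)."""
    return f"[{lang}] {text}"

def briefing_pipeline(document: str) -> dict:
    """Chain the three specialists into one result that none of them
    could produce alone."""
    summary = summarize(document)
    return {
        "entities": extract_entities(document),
        "summary_fr": translate(summary, "fr"),
    }

print(briefing_pipeline("Betamax lost to VHS. Sony moved on to Blu-ray."))
```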



It's not all rose-colored glasses, though. Overspecialization increases complexity and makes troubleshooting harder. Communication across fragmented, specialized AIs can turn into resource-intensive inter-service chatter. There are security concerns when multiple AIs handle sensitive data, and redundant services creep in even with specialization. Latency can rise. Results can become harder to keep consistent…


There is no doubt a future where these generative AIs bring both sides of that specialization coin. But there is also no doubt that we are not in a world with one clear winner. Absolutely, some AIs will die; the recent turmoil at OpenAI shows that even those in the lead can end up as just another HD-DVD casualty on the digital battlefield.


We are in a place where there will be AI systems as varied as PCs, Macs, and Linux (RIP, OS/2…), each with its own useful features. ChatGPT, PaLM 2, Llama 2, Claude, and the open-source models that keep appearing on the battlefield are all useful for different tasks. I expect the future will include all of these (give or take the ones whose company or model stumbles) at varying levels of use and with varying features. Even specialization and fragmentation will leave in place the kinds of generative AI models that have started infiltrating the corporate world.
Does that mean you need to limit yourself to only one of the several LLMs that are available? You don't want to put yourself or your employees in the position of being provided PaLM 2, only to find out that ChatGPT releases a feature that would make your business grow exponentially. In the LLM wars, don't be left behind with a potentially defeated LLM when you have a better option.


Protect yourself with the ability to get a single prompt answered by multiple LLMs. Rather than working through a prompt with GPT-4 and then going to PaLM 2 and Llama 2 to see whether the answers match, use backplain to aggregate these existing and future LLM responses at once. Not only will you get the new features and varied advantages of each model's unique training data and training methods, but you'll also be able to quickly identify potential hallucinations that require further research. Every news cycle brings a new research group or writer caught using data an AI hallucinated as if it were true. With an aggregator like backplain you can see multiple LLM responses at once, so if one model is hallucinating, it will be evident immediately alongside the others that are not.
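Under the hood, this kind of aggregation is a fan-out: one prompt goes to every model in parallel and the answers come back side by side. backplain's actual interface isn't shown here; the sketch below only illustrates the pattern, and the ask_* functions and model labels are hypothetical stubs standing in for real provider SDK calls.

```python
# Minimal sketch of the fan-out pattern an aggregator automates:
# one prompt, several models, answers side by side.
from concurrent.futures import ThreadPoolExecutor

def ask_gpt4(prompt: str) -> str:      # hypothetical stub, not a real API
    return "GPT-4 answer to: " + prompt

def ask_palm2(prompt: str) -> str:     # hypothetical stub, not a real API
    return "PaLM 2 answer to: " + prompt

def ask_llama2(prompt: str) -> str:    # hypothetical stub, not a real API
    return "Llama 2 answer to: " + prompt

providers = {"GPT-4": ask_gpt4, "PaLM 2": ask_palm2, "Llama 2": ask_llama2}

def fan_out(prompt: str) -> dict[str, str]:
    """Send one prompt to every provider concurrently; collect all replies."""
    with ThreadPoolExecutor(max_workers=len(providers)) as pool:
        futures = {name: pool.submit(fn, prompt) for name, fn in providers.items()}
        return {name: future.result() for name, future in futures.items()}

if __name__ == "__main__":
    answers = fan_out("When was the first Blu-ray player released?")
    for name, answer in answers.items():
        print(f"--- {name} ---\n{answer}\n")
    # An answer that disagrees with the rest is a hallucination candidate
    # worth checking before you cite it.
```

Swap the stubs for real API clients and the side-by-side disagreement check becomes your first-pass hallucination filter.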


Add to that the security of compliance filters, alerts, reports, and chat histories for all the LLMs you use, and you have an enterprise-level solution you can feel confident giving your users. Protect yourself, your company, your reputation, and your data with a solution that aggregates LLMs the right way. Then it doesn't matter which LLM is right for you and your employees, or whether some LLMs lose and are replaced by others. You'll have access to all of the once and future LLMs in one tool: backplain.

Ready to get started?

Take the first step to make AI work for you.
