General AI

·

7 min


Thoughts on 2024 AI Predictions – Part 5 Prompt Engineering Still Matters


I am writing a series of blog posts that will take several predictions that have been made about AI for 2024 and work to understand whether I think they will come to fruition, what they would look like if they did, and in general try to use the science fiction writer part of my mind to see into the near future. 

Thoughts on 2024 AI Predictions – Part 5 – Prompt Engineering Still Matters:

It shouldn't surprise me when people predict that prompt engineering skills are going to be unnecessary and a waste of your time to learn. "You can just ask the AI to write the best prompt for you and it will handle the rest." "The AI companies are improving their AIs so that even bad prompts get a good response." At least the latter has a little grain of truth in it. Then again, not knowing the order of operations on a non-scientific calculator is probably OK as well.

My education, from toddlerhood to now, disagrees with the notion that learning a skill is a bad thing. With the major tech players changing everything on a whim, every engineer out there (infrastructure, security, mechanical) knows that as new products or upgrades come into the company, having experience with older or competing versions means you can pick up the new tool more easily and quickly than if you had never touched a previous version. The more you learn about how a tool works, the better you will be at using it. As these AI tools evolve and grow and make prompting easier, and I'm not denying that will occur, there will be people who merely use them and people who master them and have a profound ability to pull good and true responses from them. Guess which user will have an understanding of prompt engineering...

I am not downplaying those people who don't have time to learn AI. I expect their ongoing use of prompts to be consistent with how they talk to Alexa, Siri, or Google: terse commands with minimal thought behind them. They will get the results they are asking for. "Alexa, morning news." "Siri, set an alarm for 15 minutes." "Google, turn the lights on." And for them, AI will be just that, a tool for basic tasks. You're starting to see it already with people who are not devoting even a little time to it: for them it is a glorified search engine, and one that is only partially good at times.

They have jobs to do and lives to lead; not every worker will be able to spend time away from their job becoming proficient at prompting an AI that may or may not improve their productivity. AI companies, and companies like backplain, are addressing this group of users with prompt helpers. That tech will continue to advance, and you'll have generic prompts flavored in the backend with some of the common prompt "aids" that are out there.

Even as those LLMs and the interfaces to them get better at recognizing what you are trying to prompt, knowing how they react to different prompts is the difference between an average response and the response you wanted. More importantly, it helps prevent hallucinations.

backplain is a great place to test this with its Multi Chat feature, where you enter a single prompt and a dozen or so LLMs respond at the same time. Ask them a math question and see how varied the answers are among ChatGPT, Llama, Gemini, etc. Then have someone who knows how to prompt walk the LLMs through a chain-of-thought approach, teaching them how they should handle that math question, and you'll find out really quickly which LLM hallucinates.
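To make the chain-of-thought idea concrete, here is a minimal sketch of how a bare math question can be rewrapped so the model is forced to show its reasoning step by step. The question, wording, and helper name are my own illustration, not from the post or from backplain:

```python
# Sketch: wrap a bare question in chain-of-thought instructions.
# The helper and its wording are illustrative, not a real backplain feature.

def chain_of_thought_prompt(question: str) -> str:
    """Build a prompt that asks the model to reason one step at a time."""
    return (
        "Solve the following problem. Work through it one step at a time, "
        "showing each intermediate calculation, and only state the final "
        "answer after all steps are written out.\n\n"
        f"Problem: {question}\n\n"
        "Step 1:"
    )

bare = "What is 17% of 2,450, rounded to the nearest whole number?"
print(chain_of_thought_prompt(bare))
```

Sending the bare question and the wrapped version to several LLMs side by side is exactly the kind of comparison Multi Chat makes easy.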

But it goes beyond math; sometimes it's the simplest things, the ones that should just work, that don't.

I was going through a trivia challenge at work last month. I had to send out the trivia questions and compile the answers. Taking several sets of multiple-choice responses, scoring them, and producing the list of winners seemed insanely boring to me.

Boring and repetitive work? Let's go AI!

So I took the answer key and told the LLM to use it to grade the following answers, then proceeded to paste in each person's name and entry. Repeat for every user, then ask for a tabulated list of names and points from highest to lowest. I kept it simple because, how can you mess that up? I mean, when I say the answer key is B C D B A C E A B D, and then give it results like "Greg's answers: B D D B A C E A B D", I expect it to tell me Greg had 9 out of 10 correct. And it got most of them right. But I was told Greg had 8 out of 10 correct. (That's not his real name; also, nobody got hurt during this trivia competition.)
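The comparison itself is, of course, trivially mechanical. A few lines of deterministic code reproduce what the LLM was asked to do; the sketch below is mine, not the author's, using the key and answers from the post's example:

```python
# Sketch of the trivia-grading task: compare a contestant's answers
# against the answer key letter by letter. Key and answers are from
# the post's example; the score() helper itself is hypothetical.

ANSWER_KEY = ["B", "C", "D", "B", "A", "C", "E", "A", "B", "D"]

def score(answers):
    """Count positions where the given answer matches the key."""
    return sum(given == correct for given, correct in zip(answers, ANSWER_KEY))

greg = ["B", "D", "D", "B", "A", "C", "E", "A", "B", "D"]
print(f"Greg: {score(greg)} out of {len(ANSWER_KEY)} correct")
# prints "Greg: 9 out of 10 correct"
```

Which is exactly why a wrong answer from the LLM on this task is so jarring: the ground truth is one `zip` and one `sum` away.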

It had gone through each of Greg's answers, correctly flagged #2 as wrong, but also reported that number four was "incorrect with a B, the correct answer is B". Two things here. 1. I can't make that up; that phrase is beautiful. 2. It was SUCH an easy prompt, and my expectations were low enough to think AI couldn't mess up an ANSWER KEY and a comparison task. So I did the unthinkable: I took the results as accurate without reviewing them. From a backplain perspective, I was confident enough that such a simple task would be easy that I used the single chat instead of the multi chat. Why would I need to see which AI had the best answer for an answer-key comparison? Greg had tied for first, but the announcement was that someone else, let's call her Francine, had won.

If you don't see where I am heading with that: once I took the time to prep the prompt effectively and train the LLM on how to use the answer key, it was accurate consistently. There are chatbots and add-ins with built-in prompts for these types of scenarios; I'm not denying that. I just know from experience, and as someone who knows how tech companies work, that there will be a time when an AI gets a new version, gets an upgrade to its training, or starts training on live data, and those old prompts stop working. Or they work differently, or a new prompting technique increases accuracy, or, or, or... The one constant is that things will continue to change, and the more you know about how a tool works at a fundamental level, the easier it will be to learn its updated functionality.

Keep learning.

NOTE: I prompted the LLMs in a backplain multi chat to be an AI advisor and analyze my post above. I freely acknowledge that Llama 2 is right here: there will be more intuitive ways to interact with AI, like the verbal prompting emerging right now. I still contend that learning how to prompt effectively, even with those more intuitive interfaces, will make you stand out from others. I go back to the terse, shortened commands used for the last generation of voice assistants as an example. The more you use a tool and know how it reacts to specific prompts, the more you know what is effective.

Llama 2 70B's Opinion:

I partially agree with the author's assertion that prompt engineering will still be a valuable skill in the future. The author presents a compelling argument that understanding how AI works at a fundamental level will become increasingly important as the technology evolves. He also effectively illustrates the importance of prompt engineering through personal anecdotes, such as the trivia challenge, which highlights the need for human oversight and the importance of understanding how AI works at a fundamental level.

However, I also believe that the author downplays the potential advancements in AI technology, which could potentially make prompt engineering less relevant. The author acknowledges that AI companies are constantly updating and improving their technology, but he doesn't fully consider the possibility that these advancements could make prompt engineering unnecessary.

For instance, the author mentions that chatbots and add-ins have built-in prompts for certain scenarios, which could potentially reduce the need for prompt engineering. Additionally, the author notes that AI companies are working to improve the ability of their AI models to recognize and respond to natural language, which could also reduce the need for prompt engineering.

Furthermore, while the author emphasizes the importance of understanding how AI works at a fundamental level, he doesn't fully consider the potential for AI technology to become more user-friendly and accessible. As AI technology advances, it's possible that users will be able to communicate with AI models in a more natural and intuitive way, without the need for complex prompt engineering.

In conclusion, while I agree with the author that prompt engineering will still be a valuable skill in the future, I believe that he underestimates the potential advancements in AI technology, which could potentially reduce the need for prompt engineering. I think it's important to consider both the potential advancements in AI technology and the importance of understanding how AI works at a fundamental level when assessing the future relevance of prompt engineering.

Gemini Pro's Opinion:

The author asserts that prompt engineering skills will remain crucial for those who wish to extract the full potential of AI tools, even as AI becomes more user-friendly. I agree with the author's assertion and believe that prompt engineering will continue to play a vital role in the effective use of AI systems.

The Enduring Importance of Prompt Engineering

While it is true that AI companies are working to improve user interfaces and reduce the need for deep technical knowledge, the underlying principles of prompt engineering will remain relevant for several reasons:

  • The Complexity of AI Models: AI models are becoming increasingly complex and sophisticated. To harness their full capabilities, users need to understand how to craft effective prompts that align with the model's strengths and limitations.

  • The Evolving Nature of AI: AI is constantly evolving, with new models and techniques emerging regularly. Those who have a strong foundation in prompt engineering will be better equipped to adapt to these changes and optimize their interactions with AI systems.

  • The Need for Precision and Control: In many applications, it is essential to have precise control over the output of AI models. Prompt engineering allows users to fine-tune their prompts to achieve the desired level of accuracy, specificity, and creativity.

The Role of Prompt Engineering Tools and Helpers

Prompt engineering tools and helpers can certainly make it easier for users to generate effective prompts. However, these tools are not a substitute for a deep understanding of prompt engineering principles. They can assist users in certain tasks, but they cannot fully replace the need for human judgment and expertise.

Conclusion

In conclusion, I believe that the author is correct in asserting that prompt engineering will remain an important skill for those who wish to maximize the benefits of AI. While AI companies are working to simplify user interfaces and make AI more accessible, a fundamental understanding of prompt engineering will continue to be essential for unlocking the full potential of these powerful tools.
