July 31st, 2023
Celebrating the Creativity of Our Community - LLM Challenge Highlights
Sonam Pankaj
Last week, we announced the winners of Rasa’s LLM Community Challenge in a live show and tell on our YouTube channel. We had over 100 registrations from 13 different countries and over 30 amazing applications of LLMs in the Rasa framework. It was incredible to see the range of ideas presented by our community, from simple fallback handling to prompt management. Our developer community presented incredible solutions to real-world problems leveraging conversational AI.
Here are some highlights of the show and tell:
Community has always been in our DNA and it was great to see so many experiments happening with LLMs. We are always excited to see what the community is up to, especially with this new wave of generative AI. During our show and tell, we introduced our jury and discussed the following criteria that each project was evaluated against:
- Concept innovation
- Technical implementation
- Ease of use
Our favorite part of the show and tell was being able to announce the three winners live and give them the opportunity to showcase their amazing project demos to the world. For those who may have missed the live stream, we have posted the full video on our YouTube channel here!
H.Y.D.E (How you doing everything) by Nicholas Hacult was chosen as the winner of Rasa’s LLM Community Challenge!
H.Y.D.E is a virtual personal assistant designed to run locally on your machine and help with simple tasks. It offers various text-based abilities, including text generation, paraphrasing, summarization, and question answering. This functionality is achieved through multiple pre-trained large language models (LLMs) from Hugging Face, with each model specializing in a specific task to optimize memory usage.
We were impressed by the assistant’s personalized responses, and even more impressed with its ambition of running entirely locally without depending on any cloud services. In our opinion, the most amazing part of the assistant is the way each model specializes in one specific task to optimize memory usage: rather than one model handling every task, it takes a heterogeneous, multi-model approach.
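The project itself is the place to see the real implementation, but as a rough sketch of the task-per-model idea, here is how task-specific Hugging Face pipelines could be loaded lazily so only the models actually used occupy memory. The model names and the caching helper below are our own illustrative assumptions, not code from H.Y.D.E:

```python
# Minimal sketch (not H.Y.D.E's actual code): one Hugging Face pipeline per
# task, loaded on first use so memory is only spent on the models you need.
from transformers import pipeline

# Illustrative model choices; the project may use different checkpoints.
TASKS = {
    "summarization": ("summarization", "sshleifer/distilbart-cnn-12-6"),
    "question-answering": ("question-answering", "distilbert-base-cased-distilled-squad"),
    "text-generation": ("text-generation", "gpt2"),
}

_loaded = {}


def get_pipeline(task: str):
    """Load the pipeline for a task on first use and cache it."""
    if task not in _loaded:
        hf_task, model_name = TASKS[task]
        _loaded[task] = pipeline(hf_task, model=model_name)
    return _loaded[task]


if __name__ == "__main__":
    summarizer = get_pipeline("summarization")
    text = (
        "Rasa's LLM Community Challenge drew over 100 registrations from 13 "
        "countries and more than 30 projects applying LLMs inside the Rasa framework."
    )
    print(summarizer(text, max_length=30, min_length=5)[0]["summary_text"])
```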
Check out this project here and congrats again to Nicholas for winning the challenge!
The runner-up for the challenge was LLM Powered Bot Responses & Prompt Management by Ishara Dissanayake
The second winner also tackles a deep problem in working with LLMs: prompt management. It leverages OpenAI GPT-3 completions to generate responses from structured data and to rephrase existing bot responses seen during training. This saves developers from having to hard-code response variations and makes extracting data from responses a breeze. The LLM takes care of the rest, constructing a reliable, natural answer based on both the user's input query and the base response in the bot's domain.
The overall solution thus makes the conversational AI assistant smarter by enabling dynamic response generation that adapts to user queries. It also makes developers' lives easier by eliminating the need to construct and hard-code different response variations, and it makes prompt management a breeze with the prompts module and the llm_prompts.yml file, which keep all prompts in a single place.
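Ishara's project is the reference for the real details, but as a rough sketch of the pattern it describes, here is how a hard-coded base response might be rephrased with a GPT-3 completion using a prompt template kept in a single YAML file. The file contents, prompt wording, and helper names below are illustrative assumptions, not taken from the project:

```python
# Minimal sketch of the idea, not the actual implementation: keep prompt
# templates in one YAML file and let an OpenAI GPT-3 completion rephrase a
# static base response in the context of the user's query.
import openai
import yaml

# llm_prompts.yml (illustrative contents, not the project's real file):
# rephrase: |
#   The user asked: "{query}"
#   Rephrase the following answer so it reads naturally in context:
#   "{base_response}"

with open("llm_prompts.yml") as f:
    PROMPTS = yaml.safe_load(f)


def rephrase_response(query: str, base_response: str) -> str:
    """Build the prompt from the shared template and ask GPT-3 to rephrase."""
    prompt = PROMPTS["rephrase"].format(query=query, base_response=base_response)
    completion = openai.Completion.create(
        model="text-davinci-003",  # GPT-3 completions model
        prompt=prompt,
        max_tokens=100,
        temperature=0.7,
    )
    return completion.choices[0].text.strip()


if __name__ == "__main__":
    print(rephrase_response(
        "Can I change my delivery address after ordering?",
        "Delivery addresses can be changed up to 24 hours before dispatch.",
    ))
```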
Check out this project here and congrats again to Ishara for being awarded second place in the challenge!
The third-place award for the challenge went to RoamWise by Suman Das
RoamWise is a travel planning application designed to enhance your exploration and make your journeys more seamless. RoamWise helps you unlock the full potential of your travels by providing intelligent recommendations, comprehensive itineraries, and convenient travel management tools. It uses Rasa, LangChain, and generative AI, and is well integrated with Slack.
Some key features of RoamWise are itinerary creation, destination discovery, and context-based recommendations.
Check out this project here, and congrats again to Suman for being awarded third place in the challenge!
What’s next for Rasa after this challenge?
Rasa’s aim has always been to simplify and optimize developers’ work in conversational AI. Seeing the innovation happening in the field has also inspired us to investigate and experiment with using LLMs natively in the Rasa framework, something we will announce in the future.
We also understand that the community is excited about LLMs in Rasa, so we will be highlighting these projects in our Community Showcase for all to see. If you are working on something that’s innovative and interesting and want to share it with the world, we encourage you to submit it here.
Thank you again to everyone who participated in our LLM Community Challenge! We appreciate all of the time and effort that went into each project, and it was amazing to see everyone’s brilliant ideas. Our first community LLM challenge was a success, and we hope you’ll stay tuned for upcoming collaborations and innovation updates from Rasa.