
Hackathon

Learn how to build an LLM application with open source

An invitation to hack on LLMs with Fuzzy Labs

It all started 12 months ago…

After we had initially calmed down from playing with ChatGPT, we wanted to see if we could build something similar using open source. We called it MindGPT.

While the end state was fairly primitive, it a) proved the concept and b) generated a lot of interest. This gave us the motivation to see how much further we could push open source innovation with LLMs.

So for 12 months we’ve written blogs, LinkedIn posts and recorded videos about our work on productionising LLMs—going DEEP into things like prompt engineering, model inference, guardrails, cyber security and monitoring.

Recently we’ve taken all of this, plus what we've learned from implementing this on customer projects, and built our own reference architecture for an open source LLM stack.

We call it ‘Fuzzability’ - yep, it’s not a real word; Joe made it up.

We think it’s pretty special. But it has occurred to us that checking just how ‘special’ it is, with some Fuzzy Labs’ friends, might be a good idea.

So, on the 2nd July we’re inviting you to spend the day with us building on top of our Fuzzability stack to see what you think of it. Together we’ll cover everything from provisioning infrastructure and data pipelines, to model serving and prompting.

You just need to bring an idea and some data, and we’ll pair you up with one of our awesome engineers to build it together. We want you to have a memorable time spent with us and hope that we can inspire each other to push what’s possible with open source LLMs.

We really want to hear what you think about what we’ve built. Our tooling is opinionated, and we know you’ll have your own views, insights, and experiences that differ from ours. We want you to challenge us on our assumptions, and tell us what you’d do differently. Be brutally honest with us.

We’ll provide everything you need: desk space in our office at DiSH, break-out areas to relax or discuss the deep questions of the day (like mathematically-inspired socks or whether the Ballmer peak is real), and all the food and kombucha you need to keep those creative juices flowing. And don’t worry, it’s not yet another pizza event; healthy food and no afternoon slumps.

It's all in Python, so obviously we'll be doing duck-typing.

How do I sign up?

The hackathon will be held on the 2nd July at our office in Manchester. Click below to complete our sign-up form, and we'll be in touch soon.

Sign up

Frequently Asked Questions

🔨​ What can I build?

Our tooling is set up to support a wide range of RAG-style applications. RAG stands for Retrieval Augmented Generation, and essentially it covers any application where you’ve got some natural language data and you want to ask questions in plain English (or another language!) about that data.

For some inspiration, you might think about building:

➡️​ A chatbot to answer your customer support questions.

➡️ A tool to summarise long legal documents.

➡️ A game with AI-generated dialogue.

➡️ An educational tool to teach somebody a language or a skill.

⁉️ What is RAG?

Retrieval Augmented Generation (RAG) is a technique where a large language model is combined with a dataset to answer natural-language questions using that data.

For example, suppose your company has a lot of documentation about your products, and you want to make it easier for people to find information within it. An LLM configured for RAG would allow you to answer questions like:

➡️ “How do I install Product XYZ?”

➡️ “Which product is recommended if I want to BLAH?”

How it works is quite interesting. First, your data is ingested into a vector database. This is a type of database optimised to find content based on semantic similarity. When a user asks a question, we first search the database for relevant content that might help answer it, and then pass the question and that content through to the LLM, which comes up with an answer.
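To make that flow concrete, here’s a minimal sketch in Python. It isn’t the Fuzzability stack itself: TF-IDF and an in-memory list stand in for a real embedding model and vector database, and the final LLM call is left as a placeholder.

```python
# A minimal sketch of the RAG flow described above (not the Fuzzability stack).
# TF-IDF stands in for a real embedding model; a Python list stands in for a
# vector database; the LLM call is a placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# 1. Ingest: embed your documents and keep the vectors around.
documents = [
    "To install Product XYZ, download the installer and run setup.sh.",
    "Product ABC is recommended for small teams on a budget.",
    "Product XYZ requires Python 3.10 or newer.",
]
vectoriser = TfidfVectorizer()
doc_vectors = vectoriser.fit_transform(documents)

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """2. Retrieve: find the documents most similar to the question."""
    question_vector = vectoriser.transform([question])
    scores = cosine_similarity(question_vector, doc_vectors)[0]
    best = scores.argsort()[::-1][:top_k]
    return [documents[i] for i in best]

def answer(question: str) -> str:
    """3. Generate: hand the question plus retrieved context to the LLM."""
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    # In a real stack this prompt goes to a served LLM; here we just return
    # the prompt so you can see what the model would receive.
    return prompt

print(answer("How do I install Product XYZ?"))
```

The real stack swaps each stand-in for a production component, but the shape of the pipeline is the same: ingest, retrieve, then generate.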

​​📚​ What skills will I need?

To get the most out of the hackathon, you should have 2-4 years of experience in Python and machine learning, with some experience building LLM-based systems, plus some prior experience with any of the following: AWS, Azure, Google Cloud, Kubernetes, Terraform.

💾​ What data will I need?

To take part, you’ll need to bring a dataset of your own. Generally speaking, we’re looking for textual data, for instance PDFs, Word documents, or JSON. After you sign up, we’ll have a conversation about your specific data and use case so we can ensure you'll get the most out of the hackathon. Your data will be kept secure within an AWS environment that we'll provision especially for the hackathon, and all data will be deleted afterwards.
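As a rough illustration of what “textual data” means in practice, here’s one way you might sanity-check a PDF before the day. The pypdf library and the file path are just assumptions for the example; any extraction approach is fine, and we’ll help with ingestion on the day anyway.

```python
# Quick sanity check (an illustrative sketch, not part of the hackathon setup):
# confirm you can pull plain text out of one of your documents.
from pypdf import PdfReader

reader = PdfReader("docs/product-manual.pdf")  # hypothetical example file
text = "\n".join(page.extract_text() or "" for page in reader.pages)
print(f"Extracted {len(text)} characters from {len(reader.pages)} pages")
```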

🔓​ Will my data be secure?

Yes. Ahead of the hackathon we'll set you up with a dedicated AWS environment, segregated from other attendees, and this is where your data will be held. We’ll take measures to secure this environment, and the environment along with all data will be deleted afterwards.

☁️​ What cloud environments do you currently support?

Fuzzability is currently geared specifically towards use in AWS, though other cloud environments may be added in the future.

For convenience and ease of setup, everything during the hackathon will be deployed to an AWS environment that we configure and manage. However, the tool itself is designed for deployment to any AWS account, and if you’d like to run it on your own AWS afterwards, we’ll be happy to support you with that.

⚒️​ Will you provide the AWS environment?

Yes. For the hackathon, we’re providing each attendee an AWS environment to work in.

🏗️​ Can I use what I’ve built afterwards?

Yes, you can take it home and keep using it afterwards. But to do this, you’ll need to set it up in your own AWS environment - something we’ll be more than happy to help you with.

To take part, we'll ask you to sign an agreement to not re-sell the Fuzzability tooling or use what you've built commercially. Afterwards, however, we'll be more than happy to discuss future options including commercial use.

⏱️​ What do I need to prepare beforehand?

The most important preparation is formulating your idea prior to the hackathon and preparing a dataset to bring along. We’ll work with you on everything else, from taking your data and getting it into the system to generating and tweaking responses from the LLM.

Additionally, you should bring a laptop that you can install software on and use for Python development. You can use macOS, Linux, or Windows. We’ll share a full list of dependencies and versions ahead of time and help you get set up before the hackathon.

💷​ What’s the cost for me?

There’s no cost for attending. Additionally, for everyone that comes along, we’ll make a donation to Guide Dogs UK.

🧑🏻‍🤝‍🧑🏻​ How many places are there?

For this event, there are 10 places available, and these can be filled by individuals or small teams (up to 3 per team). But don’t worry if you don't get shortlisted this time; we’re planning to hold more in the future.

🤝🏻​ Can I take part as a team?

Yes. Individuals and small teams are equally welcome, with a maximum team size of 3.

🍕​ Will there be food?

Forget about greasy hackathon food: we’ll be serving delicious, healthy food to keep the creative juices flowing and avoid afternoon slumps, and we’ll cater for any dietary requirements.

🧭​ Where is it?

It’s taking place at our office:

Fuzzy Labs
GM Digital Security Hub
1 Lincoln Square, Manchester
M2 5LN

The entrance to the office is on Lloyd Street, and is the same door as the entrance to the Manchester Register Office.

Google Maps link.

✉️​ How can I contact you?

If there's anything else you'd like to ask or assistance you need, please feel free to contact us via hackathon@fuzzylabs.ai.