Just Start: What Building a Roulette App Taught Me About AI, Uncertainty, and the Fun of Just Having a Go!

Every so often, I like to step away from formal projects and build something purely out of curiosity: no deadlines, no client, just a structured challenge to explore an idea.

So whilst on a recent break, I decided to spend 3 hours building an AI-powered roulette prediction web app.

I wasn’t trying to beat the table or build a commercial product; the aim was simply to follow a process, from idea to working prototype, and see what I could learn about AI along the way.

It’s the same mindset we teach at Code Nation: start small, think clearly, test fast, and learn through doing.

Why Roulette?

Roulette is unpredictable; that’s exactly what made it interesting. It combines pattern, chance, and human behaviour, which makes it a perfect playground for testing how AI can learn from messy, uncertain data.

Each spin tells a story:

  • Is there a rhythm in the dealer’s throw?
  • Are certain numbers clustering?
  • How should the betting coverage spread adapt?


This was about exploring randomness and seeing how well an AI system could learn from it.

The Process

I built the app in Python, but GPT supported almost every step of the journey, not just the code. Here’s the flow I followed:

  1. Framing the question – I asked GPT to help define what I was actually trying to test: “Can bias or rhythm in roulette be modelled in a way that supports adaptive learning and proposes spread coverage?”
  2. Researching the background – GPT sourced and summarised academic studies, including Small & Tse (2012) and Small (2008), which explored measurable bias in roulette systems.
  3. Exploring the market – We looked at existing “AI roulette” apps and identified what they lacked (contextual learning and dealer specificity).
  4. Iterating the problem statement – Together we refined the logic, defining the inputs, outputs, and feedback loops.
  5. Creating the statement of work – GPT helped me write a mini spec to hand over to my “GPT coding agents,” outlining modules for data input, adaptive logic, and risk presets.
  6. Building the code – The coding agents wrote the Python logic to calculate neighbour spreads, adjust ranges, and apply dynamic risk levels (a simplified sketch of the neighbour-spread idea follows this list).
  7. Testing and validation – My “GPT testing agent” then reviewed the code, tested scenarios, and helped refine coverage logic before I trialled it manually.
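
To make step 6 concrete, here is a minimal sketch of the neighbour-spread idea: pick a target pocket and cover its physical neighbours on the wheel, with the width of the coverage controlled by a risk preset. The wheel order is the standard European single-zero layout; the preset values, function name, and numbers are illustrative assumptions, not the app’s actual code.

```python
# Physical order of pockets on a European single-zero wheel, clockwise from zero.
WHEEL_ORDER = [
    0, 32, 15, 19, 4, 21, 2, 25, 17, 34, 6, 27, 13, 36, 11, 30, 8, 23, 10,
    5, 24, 16, 33, 1, 20, 14, 31, 9, 22, 18, 29, 7, 28, 12, 35, 3, 26,
]

# Hypothetical risk presets: lower risk means wider coverage around the target.
RISK_PRESETS = {"low": 4, "medium": 2, "high": 1}


def neighbour_spread(target: int, risk: str = "medium") -> list[int]:
    """Return the target pocket plus its physical neighbours on each side of the wheel."""
    span = RISK_PRESETS[risk]
    idx = WHEEL_ORDER.index(target)
    # Wrap around the wheel so spreads near zero still cover both sides.
    return [WHEEL_ORDER[(idx + offset) % len(WHEEL_ORDER)] for offset in range(-span, span + 1)]


print(neighbour_spread(0, "low"))  # e.g. [12, 35, 3, 26, 0, 32, 15, 19, 4]
```

The design choice here is simple: risk level maps directly to how many pockets the bet covers, which is what “adjust ranges and apply dynamic risk levels” meant in practice.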


It was a structured, collaborative process of human reasoning supported by AI execution.

The Science

There’s genuine research behind the concept. Studies like Small & Tse (2012) and Small (2008) showed that even in systems designed to be random, physics and human influence introduce measurable bias.

I wasn’t trying to recreate their experiments; I simply used the same mindset: apply structure, gather data, and observe how adaptive logic behaves when the system itself is uncertain.
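
As an illustration of that mindset (and not a recreation of the published experiments), one simple way to ask whether recorded spins show measurable bias is to compare observed pocket counts against the uniform expectation for a 37-pocket wheel using a chi-squared test. The function name and the choice of test are my assumptions for this sketch.

```python
from collections import Counter

import numpy as np
from scipy.stats import chisquare


def bias_check(spins: list[int]) -> float:
    """Return the p-value of a chi-squared test against a fair 37-pocket wheel."""
    counts = Counter(spins)
    observed = np.array([counts.get(pocket, 0) for pocket in range(37)])
    # chisquare defaults to a uniform expected distribution across categories.
    return chisquare(observed).pvalue


# A long run of simulated fair spins should typically give a large p-value (no bias).
rng = np.random.default_rng(seed=1)
fair_spins = rng.integers(0, 37, size=5000).tolist()
print(f"p-value for fair spins: {bias_check(fair_spins):.3f}")
```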

What Makes It Different

Yes, there are already “AI roulette” apps out there. Most ask for the last ten numbers and generate a guess. This experiment took a different path: it treated every spin as new data and every dealer as unique.

The system wasn’t built to guess; it was built to learn, adjusting dynamically to context and feedback.
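
To show what “built to learn” could mean in practice, here is a minimal sketch of a per-dealer profile that fades old spins and reinforces recent ones. The decay factor, class name, and method names are illustrative assumptions rather than the app’s real design.

```python
class DealerProfile:
    """Tracks recency-weighted pocket frequencies for one dealer."""

    def __init__(self, decay: float = 0.97):
        self.decay = decay
        self.weights = [1.0] * 37  # start with a flat prior over the 37 pockets

    def observe(self, pocket: int) -> None:
        """Fade old evidence, then reinforce the pocket that just hit."""
        self.weights = [w * self.decay for w in self.weights]
        self.weights[pocket] += 1.0

    def hottest(self, n: int = 5) -> list[int]:
        """Return the n pockets with the highest recency-weighted counts."""
        return sorted(range(37), key=lambda p: self.weights[p], reverse=True)[:n]


# Each dealer gets their own profile, so every spin only updates that context.
profile = DealerProfile()
for pocket in [17, 17, 34, 6, 17]:
    profile.observe(pocket)
print(profile.hottest(3))  # 17 dominates this tiny example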

That same principle underpins how we use AI in business: observe, learn, and adapt continuously.

What I Learned

I wanted to have fun, but within a structured process – Done.

I wanted to see what would happen if I used GPT as both collaborator and coach, from defining the question to testing the solution. I was impressed with the results, particularly the shift from using GPT for task execution to using it as a coach.

Can AI be used to create custom tools in a short period of time? The answer was yes. The exercise also reminded me that complex systems are best understood through experimentation, and that AI development is far more accessible than most people think.

Conclusion

At Code Nation, we use projects like this to show how applied AI can be approached with curiosity and rigour: no big team, no long lead times, just a good question, a structured approach, the patience to follow it through, and a switch in mindset from task creation to coaching.

If you’ve been thinking about trying something similar, like building a small web app, testing an AI model, or just exploring data in a new way, my advice is simple: just start, stay curious, switch to being coached, and enjoy it.

If you’d like to understand the structured approach we use at Code Nation for projects like this, I’d be happy to share more. And if you’re interested in the sessions we run in these areas, check out our course catalogue at www.wearecodenation.com.

David Muir
Founder & Managing Partner
Code Nation