
EduSpark: Adaptive Learning Platform with AI Tutoring

An AI tutoring platform that adapts to each student. Built with Flutter and a custom recommendation engine, it improved test scores by 34% and secured a government contract.

+34% test score improvement
2.5x engagement increase
50 schools (phase 2)
4.6/5 teacher satisfaction

The challenge

EduSpark wanted to rethink education technology from the ground up. Their insight was simple but powerful: every student learns differently, yet every edtech platform delivers the same content to everyone. Students who grasp concepts quickly get bored. Students who struggle fall further behind. The result is disengagement — the number one problem in digital learning.

Maya Patel, EduSpark product lead, came to us with a specific goal: build a platform that adapts in real time to each student's strengths and weaknesses. Not just adjusting difficulty — actually changing the teaching approach based on how the student learns. Visual learners get diagrams. Verbal learners get explanations. Kinesthetic learners get interactive simulations.

The platform needed to work on iPads (used in schools), Android tablets (used at home), and web browsers (for teacher dashboards). Content covered math and science for grades 6-10, initially in English with Arabic localization planned for phase 2.

Our approach

We chose Flutter for the student-facing app — it gave us iPad, Android tablet, and web from a single codebase with native-feeling performance on all three. For the teacher dashboard, we built a separate Next.js web app that connects to the same Supabase backend, giving teachers real-time visibility into student progress.

The AI system has three layers. Layer 1: a content recommendation engine that selects the next question or lesson based on the student's knowledge graph (what they know, what they struggle with, what they have not yet seen). Layer 2: a difficulty calibration system that adjusts problem complexity in real time based on response accuracy and time-on-task. Layer 3: an LLM-powered AI tutor that provides Socratic-style hints when students get stuck.
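Layer 2 can be sketched in a few lines. This is a minimal illustration of calibrating difficulty from rolling accuracy and time-on-task; the function name, step size, and thresholds are all hypothetical, not EduSpark's production values.

```python
def calibrate_difficulty(current: float, accuracy: float, avg_time_s: float,
                         target_time_s: float = 60.0) -> float:
    """Return a new difficulty level in [0, 1].

    accuracy:   rolling fraction of recent answers that were correct.
    avg_time_s: rolling mean time-on-task for recent answers.
    """
    step = 0.05
    new = current
    if accuracy > 0.85 and avg_time_s < target_time_s:
        new += step   # student is cruising: raise difficulty
    elif accuracy < 0.6 or avg_time_s > 2 * target_time_s:
        new -= step   # student is struggling: ease off
    return min(1.0, max(0.0, new))
```

The key design choice is that both signals gate each adjustment: fast *and* accurate raises difficulty, while either low accuracy or very long solve times lowers it.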

The recommendation engine uses a modified knowledge tracing algorithm. For each student, we maintain a probabilistic model of their mastery across every concept in the curriculum. Each answered question updates this model. The system then selects questions that target the boundary between known and unknown — the zone of proximal development where learning is most effective.
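For readers unfamiliar with knowledge tracing, the standard Bayesian Knowledge Tracing update looks roughly like this (this is the textbook algorithm, not EduSpark's modified variant; parameter names follow the literature — `p_learn` for the learning transition, `p_slip`, `p_guess` — and the ZPD band is illustrative):

```python
def bkt_update(p_mastery: float, correct: bool,
               p_learn: float = 0.1, p_slip: float = 0.1,
               p_guess: float = 0.2) -> float:
    """Posterior probability that a concept is mastered after one answer."""
    if correct:
        num = p_mastery * (1 - p_slip)
        den = num + (1 - p_mastery) * p_guess
    else:
        num = p_mastery * p_slip
        den = num + (1 - p_mastery) * (1 - p_guess)
    posterior = num / den
    # Account for learning between practice opportunities.
    return posterior + (1 - posterior) * p_learn

def in_zpd(p_mastery: float, low: float = 0.3, high: float = 0.8) -> bool:
    """Target concepts on the boundary between known and unknown."""
    return low <= p_mastery <= high
```

Each answer nudges the per-concept mastery estimate up or down, and the question selector draws from concepts whose estimate falls inside the ZPD band.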

The AI tutor: making LLMs work in education

The biggest technical challenge was the AI tutor. LLMs are prone to simply giving answers — the opposite of good teaching. We engineered a multi-layered prompting system that forces the model into a Socratic mode: it can ask guiding questions, provide hints that lead toward the answer, and explain concepts using analogies, but it absolutely cannot reveal the answer directly.
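A single layer of such a system might look like the sketch below. The wording and the helper are illustrative only — the production system layers several prompts plus an automated leakage check on top of this:

```python
# Illustrative Socratic guardrail prompt (not the production wording).
SOCRATIC_SYSTEM_PROMPT = """\
You are a tutor for grade {grade} {subject}.
Rules, in priority order:
1. Never state the final answer, even if the student asks for it directly.
2. Respond with a guiding question or a hint that narrows the search space.
3. Explain underlying concepts with analogies appropriate for age {age}.
4. If the student is stuck twice on the same step, break the step into
   smaller sub-steps instead of revealing more of the solution.
"""

def build_system_prompt(grade: int, subject: str, age: int) -> str:
    """Fill the guardrail template for one student context."""
    return SOCRATIC_SYSTEM_PROMPT.format(grade=grade, subject=subject, age=age)
```

Putting the no-answer rule first, and making later rules describe what the model *should* do instead, matters more in practice than any single phrasing.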

We built an evaluation pipeline that tests every prompt change against 500 real student interactions. Each response is scored on five dimensions: educational value, hint quality, answer leakage (must be zero), age-appropriateness, and engagement. Only prompts that score above threshold on all five dimensions get deployed.
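The deploy gate reduces to a simple predicate. The dimension names come from the text above; the numeric thresholds and the gate function are hypothetical stand-ins for the real scoring pipeline:

```python
DIMENSIONS = ["educational_value", "hint_quality", "answer_leakage",
              "age_appropriateness", "engagement"]

# Hypothetical thresholds; answer leakage is a count and must be exactly zero.
THRESHOLDS = {dim: 0.8 for dim in DIMENSIONS}
THRESHOLDS["answer_leakage"] = 0

def passes_gate(scores: dict) -> bool:
    """A prompt change ships only if every dimension clears its threshold."""
    if scores["answer_leakage"] > THRESHOLDS["answer_leakage"]:
        return False
    return all(scores[d] >= THRESHOLDS[d]
               for d in DIMENSIONS if d != "answer_leakage")
```

Treating leakage as a hard veto rather than one weighted score is what makes the "must be zero" requirement enforceable.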

The tutor also adapts its communication style to the student's profile. For students who respond well to encouragement, it is more enthusiastic. For students who prefer direct communication, it is concise and factual. This personalization is driven by engagement data — we track which response styles correlate with continued effort for each student.
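In its simplest form, that tracking is a per-student tally of which style preceded continued effort. A minimal sketch, with hypothetical class and style names:

```python
from collections import defaultdict

class StyleSelector:
    """Pick the response style that most often precedes continued effort."""

    def __init__(self):
        # style -> [continued_effort_count, total_count]
        self.stats = defaultdict(lambda: [0, 0])

    def record(self, style: str, continued: bool) -> None:
        self.stats[style][1] += 1
        if continued:
            self.stats[style][0] += 1

    def best_style(self, default: str = "encouraging") -> str:
        if not self.stats:
            return default
        return max(self.stats,
                   key=lambda s: self.stats[s][0] / self.stats[s][1])
```

A production version would also need an exploration strategy so a style is not locked in after a handful of interactions.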

Design for engagement, not just education

A common mistake in edtech is designing for teachers and hoping students will follow. We designed for students first. The app uses a clean, game-inspired interface with progress visualizations that feel more like a fitness app than a textbook. Daily streaks, mastery badges, and a visual knowledge map showing explored vs unexplored topics keep students coming back.

We also built a collaborative feature where students can challenge classmates to solve problems — with adaptive difficulty ensuring fair matches regardless of skill level. This social layer turned out to be the highest-engagement feature, with students voluntarily using the app outside school hours to compete.
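One way to make such matches fair, sketched here as an illustrative handicap scheme rather than EduSpark's actual matchmaking: each student answers questions calibrated to their own mastery level, and the score compares performance against that student's own expected accuracy.

```python
def challenge_score(correct: int, attempted: int,
                    expected_accuracy: float) -> float:
    """Positive when a student beats their own expected accuracy.

    expected_accuracy comes from the student's mastery model, so a weaker
    student who overperforms can beat a stronger student who coasts.
    """
    return correct / attempted - expected_accuracy
```

Scoring relative to expectation, rather than on raw correct answers, is what keeps a mixed-skill challenge competitive for both sides.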

Results and impact

EduSpark piloted across 3 schools with 450 students over one academic term. The results exceeded expectations. Students using EduSpark showed a 34% improvement in test scores compared to control groups using traditional study methods. More impressively, student engagement time was 2.5x higher than the industry average for educational apps.

Teachers reported that the real-time dashboard transformed their ability to identify struggling students early. Instead of waiting for test results, they could see which students were stuck on specific concepts that day and provide targeted help. Teacher satisfaction scores averaged 4.6 out of 5.

On the strength of these pilot results, EduSpark secured a government contract to deploy the platform across 50 schools in the 2025-2026 academic year. The contract includes Arabic localization and expansion to grades 4-12. We are currently building phase 2 with EduSpark, adding science lab simulations and a parent dashboard.

Lessons learned

LLMs in education require extreme guardrails. Unlike a chatbot where a wrong answer is annoying, a tutoring AI that gives incorrect math explanations is actively harmful. Our evaluation pipeline was the most important investment in the entire project — and it is what gave schools the confidence to deploy.

Gamification works, but only when it serves learning. We tested several engagement features during development and cut anything that increased time-in-app without improving learning outcomes. The features that survived — streaks, knowledge maps, peer challenges — all directly correlate with better test performance.

Tech Stack

Flutter · Next.js · Python · OpenAI · Supabase · PostgreSQL

"Working with iHux felt like having a senior engineering team embedded in our company. They challenged our assumptions, improved our product thinking, and shipped a beautiful app."

Maya Patel

Product Lead, EduSpark