Build a ‘micro’ dining app in 7 days: a runnable full‑stack template with AI prompts

Ship a dining micro app in 7 days: minimal Flask/Express backends, Vite React frontend, and reusable LLM prompt patterns for rapid prototyping.

Stop reinventing small apps: build a micro dining app in 7 days

Decision fatigue in chats, last-minute dinner plans, and juggling group preferences are real developer and product pain points. If you want a working, reusable template you can ship in a week, this is the sprint plan: a minimal backend, a lightweight React frontend, and a small set of LLM prompt patterns that do the heavy lifting for recommendations. This article gives a runnable, production-practical template that recreates the spirit of Rebecca Yu's week-long dining app experiment — updated for 2026 trends like mature RAG patterns, edge functions, and on-device LLM options.

Why micro apps matter in 2026

By late 2025 and into 2026, micro apps — personal, focused apps built to solve one narrow problem — are mainstream. Lower friction cloud infra, improved LLM APIs, and local-first compute make it possible for a single developer to go from idea to running app in days. Rebecca Yu's Where2Eat is a perfect example of this trend: a focused dining recommender that solves a real pain for a small user group. The goal here is not to build a giant product — it is to ship quickly, iterate, and reuse the template for future micro apps.

"Micro apps are fast to build, cheap to run, and perfectly aligned with personal workflows. — trend summary, 2026"

What you get: a one-week sprint template

The template includes:

  • A 7-day sprint plan with daily deliverables and acceptance criteria
  • Two minimal backend options you can swap: Flask (Python) and Express (Node)
  • A lightweight React frontend bootstrapped with Vite
  • Runnable examples for calling LLMs and simple RAG with pgvector or local file embeds
  • LLM prompt patterns and a prompt library you can reuse
  • Ops-light deployment options and no-ops strategies for maintenance

Sprint plan: 7 days to a useful micro dining app

Keep each day focused and small. The goal is a usable MVP by day 4 and a polished version by day 7.

  • Day 1 – Define scope & data model: users, restaurants, preferences, sessions. Start with in-memory storage.
  • Day 2 – Minimal backend: API endpoints for restaurants, votes, and recommendations using a simple LLM call.
  • Day 3 – Frontend skeleton: Vite + React, basic UI to list restaurants and capture votes/preferences.
  • Day 4 – LLM integration: wire a recommendation endpoint, test prompt patterns, validate outputs.
  • Day 5 – RAG and personalization: add a tiny vector store or local embedding cache for context.
  • Day 6 – Polish & security: API keys, env vars, CORS, input validation.
  • Day 7 – Deploy & document: deploy backend to an edge or serverless platform and publish frontend to Vercel/Netlify.

Minimal backend: Flask version (runnable)

Use Flask if you prefer Python or already have Python tooling. This version is intentionally tiny — it demonstrates how to expose an API that calls an LLM for recommendations.

from flask import Flask, request, jsonify
import os
import requests

app = Flask(__name__)

# In-memory store for prototype
RESTAURANTS = [
  {'id': 1, 'name': 'Miso Corner', 'tags': ['japanese', 'ramen']},
  {'id': 2, 'name': 'Taco House', 'tags': ['mexican', 'casual']},
  {'id': 3, 'name': 'Veggie Delight', 'tags': ['vegetarian', 'healthy']}
]

SESSION_VOTES = {}  # session_id -> list of restaurant ids

OPENAI_KEY = os.getenv('OPENAI_API_KEY')
LLM_ENDPOINT = os.getenv('LLM_ENDPOINT') or 'https://api.openai.com/v1/chat/completions'

def call_llm(prompt):
    headers = {
        'Authorization': f'Bearer {OPENAI_KEY}',
        'Content-Type': 'application/json'
    }
    payload = {
        'model': 'gpt-4o-mini',
        'messages': [
            {'role': 'system', 'content': 'You are a helpful dining recommender.'},
            {'role': 'user', 'content': prompt}
        ],
        'max_tokens': 200
    }
    r = requests.post(LLM_ENDPOINT, headers=headers, json=payload, timeout=30)  # avoid hanging on a slow provider
    r.raise_for_status()
    return r.json()['choices'][0]['message']['content']

@app.route('/api/restaurants')
def list_restaurants():
    return jsonify(RESTAURANTS)

@app.route('/api/vote', methods=['POST'])
def vote():
    data = request.json
    sid = data.get('session')
    rid = data.get('restaurant')
    SESSION_VOTES.setdefault(sid, []).append(rid)
    return jsonify({'ok': True})

@app.route('/api/recommend', methods=['POST'])
def recommend():
    data = request.json
    sid = data.get('session')
    votes = SESSION_VOTES.get(sid, [])
    prompt = f"Given restaurants {RESTAURANTS} and votes {votes}, recommend top 3 places and explain why for this group."
    answer = call_llm(prompt)
    return jsonify({'recommendation': answer})

if __name__ == '__main__':
    app.run(debug=True, port=5000)

How to run:

  • python -m venv .venv && source .venv/bin/activate
  • pip install flask requests
  • export OPENAI_API_KEY='your_key' && python app.py

Notes: swap the LLM endpoint and model to your preferred provider. For production, move the in-memory store to a tiny DB (SQLite or Postgres) or a serverless key-value store.

Minimal backend: Express version (runnable)

If you prefer Node.js, this Express example is equivalent. It demonstrates a single recommendation endpoint and minimal state.

import express from 'express'
import fetch from 'node-fetch'

const app = express()
app.use(express.json())

const RESTAURANTS = [
  {id: 1, name: 'Miso Corner', tags: ['japanese','ramen']},
  {id: 2, name: 'Taco House', tags: ['mexican','casual']},
  {id: 3, name: 'Veggie Delight', tags: ['vegetarian','healthy']}
]

const sessionVotes = {}
const OPENAI_KEY = process.env.OPENAI_API_KEY
const LLM_ENDPOINT = process.env.LLM_ENDPOINT || 'https://api.openai.com/v1/chat/completions'

async function callLLM(prompt) {
  const res = await fetch(LLM_ENDPOINT, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${OPENAI_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'gpt-4o-mini',
      messages: [
        {role: 'system', content: 'You are a helpful dining recommender.'},
        {role: 'user', content: prompt}
      ],
      max_tokens: 200
    })
  })
  const json = await res.json()
  return json.choices?.[0]?.message?.content || ''
}

app.get('/api/restaurants', (req, res) => res.json(RESTAURANTS))

app.post('/api/vote', (req, res) => {
  const {session, restaurant} = req.body
  sessionVotes[session] = sessionVotes[session] || []
  sessionVotes[session].push(restaurant)
  res.json({ok: true})
})

app.post('/api/recommend', async (req, res) => {
  const {session} = req.body
  const votes = sessionVotes[session] || []
  const prompt = `Given restaurants ${JSON.stringify(RESTAURANTS)} and votes ${JSON.stringify(votes)}, recommend top 3 places and explain why for this group.`
  const answer = await callLLM(prompt)
  res.json({recommendation: answer})
})

app.listen(5000, () => console.log('Server listening on 5000'))

How to run:

  • Requires Node.js 18 or newer (check with node --version)
  • npm init -y && npm install express node-fetch
  • Set "type": "module" in package.json so the import syntax above works
  • export OPENAI_API_KEY='your_key' && node server.js
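
Both backends keep votes in memory, so history vanishes on restart. If you want Day 5-style persistence without much ops, here is a minimal sketch for the Express version using better-sqlite3 (an assumed dependency, npm install better-sqlite3; any small database works):

import Database from 'better-sqlite3'

// Open (or create) a local SQLite file so votes survive restarts
const db = new Database('where2eat.db')
db.exec(`CREATE TABLE IF NOT EXISTS votes (
  session TEXT NOT NULL,
  restaurant INTEGER NOT NULL
)`)

// Drop-in replacements for the in-memory sessionVotes object
export function recordVote(session, restaurant) {
  db.prepare('INSERT INTO votes (session, restaurant) VALUES (?, ?)').run(session, restaurant)
}

export function votesForSession(session) {
  return db.prepare('SELECT restaurant FROM votes WHERE session = ?').all(session).map(row => row.restaurant)
}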

Frontend: lightweight React (Vite) skeleton

Use Vite for a fast dev loop. This tiny UI lists restaurants, lets participants vote, and calls /api/recommend.

import {useState, useEffect} from 'react'

export default function App(){
  const [restaurants,setRestaurants] = useState([])
  const [session] = useState(()=>Date.now().toString())
  const [rec, setRec] = useState(null)

  useEffect(()=>{fetch('/api/restaurants').then(r=>r.json()).then(setRestaurants)},[])

  const vote = async (id)=>{
    await fetch('/api/vote',{method:'POST',headers:{'Content-Type':'application/json'},body:JSON.stringify({session,restaurant:id})})
  }

  const ask = async ()=>{
    const r = await fetch('/api/recommend',{method:'POST',headers:{'Content-Type':'application/json'},body:JSON.stringify({session})})
    const j = await r.json()
    setRec(j.recommendation)
  }

  return (
    <div>
      <h1>Where2Eat (micro)</h1>
      <ul>
        {restaurants.map(r => (
          <li key={r.id}>
            {r.name} ({r.tags.join(', ')})
            <button onClick={() => vote(r.id)}>Vote</button>
          </li>
        ))}
      </ul>
      <button onClick={ask}>Recommend</button>
      {rec && <p>{rec}</p>}
    </div>
  )
}

Run with:

  • npm create vite@latest my-app -- --template react
  • Replace src/App.jsx with the code above
  • npm install && npm run dev
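
In development, the frontend's fetch('/api/...') calls need to reach the backend on port 5000 from the examples above. Assuming the standard Vite React template, a dev-server proxy in vite.config.js handles this:

// vite.config.js: forward /api requests to the local backend during development
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

export default defineConfig({
  plugins: [react()],
  server: {
    proxy: {
      '/api': 'http://localhost:5000'
    }
  }
})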

Prompt patterns that matter (reusable)

LLM prompts are the heart of the micro app experience. Here are patterns that worked well in experiments through late 2025 and are standard in 2026.

  • Context + instruction: give the model the dataset, user votes, and a clear instruction. Keep context compact.
  • Persona + constraints: tell the model it's a concise recommender and to output JSON for easy parsing.
  • Few-shot with edge cases: add 1–2 short examples for ambiguous inputs.
  • Function calling / tool spec: when using providers that support function calling, declare expected schema so you get structured output.
  • Safety & privacy hints: ask the model to avoid leaking PII and to provide nullable fields when data is missing.

Example prompt (system + user):

System: You are a concise dining recommender. Always return JSON with keys: choices (array of {id,score,reason}).

User: Given restaurants: [{'id':1,'name':'Miso Corner','tags':['japanese','ramen']},...] and votes: [1,2], recommend top 2 places and explain why.

Example expected JSON response (for parsing):

{
  "choices": [
    {
      "id": 2,
      "score": 0.92,
      "reason": "Matches majority preference for casual Mexican and open late."
    },
    {
      "id": 1,
      "score": 0.67,
      "reason": "Group member likes ramen."
    }
  ]
}
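
If your provider supports it, you can also request structured output at the API level rather than relying on prompt wording alone. A minimal sketch against an OpenAI-style chat completions API, assuming the JSON-mode response_format flag is available for your model, plus a defensive parser:

// Build a chat-completions payload that requests JSON output (JSON mode).
// Note: the prompt text must still mention "JSON" for this flag to be accepted.
function buildStructuredPayload(prompt) {
  return {
    model: 'gpt-4o-mini',
    response_format: { type: 'json_object' },
    messages: [
      { role: 'system', content: 'You are a concise dining recommender. Return JSON with key "choices": an array of {id, score, reason}.' },
      { role: 'user', content: prompt }
    ],
    max_tokens: 200
  }
}

// Parse defensively: malformed output degrades to an empty recommendation list
export function parseChoices(raw) {
  try {
    const parsed = JSON.parse(raw)
    return Array.isArray(parsed.choices) ? parsed.choices : []
  } catch {
    return []
  }
}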

Adding tiny RAG for context-aware suggestions

For personalization beyond basic votes, add a tiny embedding store. In 2026, pgvector and lightweight vector stores like Weaviate or on-device LLM + local embeddings are common. For a micro app, try one of these patterns:

  • Local file embeds: compute embeddings for restaurant descriptions and store them in a JSON file. On recommend, compute embedding of session preferences and nearest neighbors.
  • pgvector (one-table): store embeddings in Postgres with pgvector for a single-table RAG.
  • Vector DB as a service: use a free-tier vector DB if you prefer no ops.

Minimal embedding example (Node):

import fetch from 'node-fetch'

async function embed(text){
  // NOTE: replace the URL and model below with your provider's embeddings endpoint
  const res = await fetch('https://api.your-llm/embeddings',{
    method:'POST',
    headers:{'Authorization':`Bearer ${process.env.OPENAI_API_KEY}`,'Content-Type':'application/json'},
    body: JSON.stringify({model:'text-embedding-3-large',input:text})
  })
  const j = await res.json()
  return j.data[0].embedding
}

Keep embeddings small to control cost: compress restaurant metadata to a single-sentence description.
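
With embeddings in hand, the "local file embeds" pattern from the list above needs only a cosine-similarity scan. A minimal sketch, assuming each restaurant record carries a precomputed embedding array:

// Cosine similarity between two equal-length embedding vectors
function cosine(a, b) {
  let dot = 0, normA = 0, normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// Rank restaurants by similarity to the session-preference embedding
function nearestRestaurants(preferenceEmbedding, restaurantsWithEmbeddings, k = 3) {
  return restaurantsWithEmbeddings
    .map(r => ({ ...r, score: cosine(preferenceEmbedding, r.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
}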

No-ops and low-maintenance deployment strategies

Micro apps succeed when they stay low-touch. Here are no-ops recommendations:

  • Serverless / edge functions: host your backend as a function on Vercel, Cloudflare Pages, Fly.io, or Render; this leaves no servers for you to manage.
  • Managed DB: use Supabase or Neon for Postgres with pgvector if you need persistence.
  • Secrets management: store API keys in platform secrets, rotate periodically.
  • Autoscaling considerations: micro apps rarely need heavy scaling — prefer pay-as-you-go platforms with free tiers.
  • Observability: plug in a simple error logger (Sentry or a free-tier alternative) and use health checks to avoid surprises. See work on observability for small teams.
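
As a concrete starting point for the health-check and error-logging bullets, here are two small additions to the Express server above (Sentry or any free-tier logger can replace the console call later):

// Liveness endpoint for the platform's health checks
app.get('/healthz', (req, res) => res.json({ ok: true, uptime: process.uptime() }))

// Catch-all error handler: log the error and return a generic 500 instead of crashing the request
app.use((err, req, res, next) => {
  console.error('unhandled error', err)
  res.status(500).json({ error: 'internal error' })
})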

Security, privacy, and license checks

Micro apps often process personal preferences. Keep these rules in mind for 2026:

  • Only send minimal context to LLMs. Avoid including PII unless required and consented.
  • Mask or anonymize identifiers before sending to third-party LLMs.
  • Check LLM provider policies for data use and retention. For sensitive use-cases, prefer private endpoints or on-device LLMs (see on-device reviews).
  • License your template clearly — MIT is typical for starter kits. Add README with security recommendations.
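
One way to implement the masking bullet: swap real identifiers for stable pseudonyms before any prompt is built, and keep the mapping server-side. A small sketch (the helper below is illustrative, not part of the template above):

// Replace real identifiers with stable pseudonyms before building LLM prompts
function pseudonymize(ids) {
  const mapping = new Map()
  const masked = ids.map(id => {
    if (!mapping.has(id)) mapping.set(id, `user_${mapping.size + 1}`)
    return mapping.get(id)
  })
  return { masked, mapping } // keep the mapping server-side to re-identify results
}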

Testing and acceptance criteria

Keep your tests lightweight and focused:

  • Unit test the recommendation output parser (ensure JSON schema compliance).
  • Integration test the API flow: vote => recommend returns a 200 and parseable JSON.
  • UX test: verify that voting UI updates and that recommendations display within 2–3s for good UX.
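
As a sketch of the first bullet, here is a lightweight test using Node's built-in test runner; it assumes the parseChoices helper from the structured-output sketch earlier lives in a hypothetical parse.js module:

import test from 'node:test'
import assert from 'node:assert/strict'
import { parseChoices } from './parse.js' // hypothetical module holding the parser

test('valid recommendation JSON yields choices', () => {
  const raw = '{"choices":[{"id":2,"score":0.92,"reason":"majority pick"}]}'
  const choices = parseChoices(raw)
  assert.equal(choices.length, 1)
  assert.equal(choices[0].id, 2)
})

test('malformed output degrades to an empty list', () => {
  assert.deepEqual(parseChoices('not json'), [])
})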

Case study: porting where2eat to this template

Rebecca Yu built Where2Eat as a week-long project. Recreating that approach with this template yields concrete benefits:

  • Day 1–3: You have the same working MVP, with the same social voting loop.
  • Day 4: Adding LLM prompts refines recommendations from heuristic to conversational explanations.
  • Day 5–7: Adding a tiny RAG layer gives personalized suggestions that remember previous sessions.

Because the template emphasizes low ops and modularity, you can evolve the micro app into a personal tool (TestFlight-style beta for friends) or a small shared utility without heavy engineering overhead.

Advanced strategies and 2026 predictions

Expect these trends to accelerate micro app capabilities:

  • On-device LLMs: more capable models running locally for privacy-sensitive micro apps.
  • Better client-side embeddings: embeddings in the browser for cold-start personalization without server calls.
  • Composable AI primitives: shareable prompt libraries and function specs will let micro apps integrate with multiple LLM providers easily.
  • Function-calling as the default: structured outputs via function calling will simplify parsing and enable lightweight orchestration.

Practical takeaways — what to do now

  • Pick the backend you know best: Flask for Python shops, Express for Node shops. Keep it minimal and swap to serverless later.
  • Use Vite + React for a fast front-end loop; avoid heavy UI libraries in the first week.
  • Start with in-memory persistence; move to Postgres/pgvector only if you need history and RAG.
  • Build clear, structured prompts expecting JSON output to simplify parsing and testing.
  • Keep ops low: prefer managed platforms and small vector stores to avoid maintenance. See operational perspectives on cloud cost & edge shifts.

Appendix: quick checklist for your 7-day micro dining app

  • Day 1: data model doc, sample data, README outline
  • Day 2: endpoints: /api/restaurants, /api/vote, /api/recommend
  • Day 3: frontend listing and vote UI
  • Day 4: LLM integration and prompt design
  • Day 5: optional RAG with embeddings
  • Day 6: security review, secret management, CORS
  • Day 7: deployment, README, share with friends

Final notes: reuse, fork, and iterate

Micro apps are valuable because they are disposable and reusable. Use this template as a starting point for other personal utilities — meeting place recommenders, playlist sharers, or travel snack planners. The key is to keep the surface area small and iterate quickly.

Ready to sprint? Clone or scaffold the template from your preferred stack, follow the day-by-day plan, and ship an MVP in 7 days. If you'd like, fork this approach: swap the LLM provider, plug in a vector DB, or adapt the prompt library to a different domain.

Call to action

Start your 7-day micro app sprint today: pick Flask or Express, scaffold the React frontend, and implement the simple recommendation prompt. Share your fork, issues, and improvements in the project repo so others can reuse and contribute; micro apps scale by recomposition, not by complexity. If you want the runnable repo and prompt library, grab the starter pack on GitHub, star it, and open an issue with your use case so I can add patterns that match real-world needs.
