Building a Mapping Micro App on a Raspberry Pi: Offline Routing with Local Maps

Build a Raspberry Pi 5 dining-finder micro app that runs fully offline with local map tiles, on-device routing, and a tiny ranking model.

Stop fighting flaky connectivity: build an ultra-light dining-finder that works anywhere

Your team is tired of slow mobile networks, bloated cloud dependencies, and long deploy cycles. You want a small, private app that finds nearby restaurants, ranks them based on group preferences, and runs when there is no internet. In 2026, the Raspberry Pi 5 plus the AI HAT+ 2 make that realistic: local map tiles, local routing, and an on-device ranking model in a compact micro app that fits on an SD card.

Why this matters in 2026 (short version)

Edge AI and micro apps are mainstream. Late 2025 saw a wave of affordable inferencing hardware for the Pi family (AI HAT+ 2), accelerating on-device ML and privacy-first UX. Meanwhile, developers prefer small, maintainable tools that avoid vendor lock-in. This project shows a practical combo: local map tiles, lightweight local routing, and a tiny on-device ranking model to build a dining-finder micro app that runs fully offline on Raspberry Pi 5.

What you'll build (outcome)

  • A self-hosted micro app (Node + Leaflet) that serves map tiles and UI locally
  • Offline routing within a city using OSRM (short extracts) or a lightweight alternative for small areas
  • An on-device ranking model (TFLite/ONNX) running on AI HAT+ 2 to score restaurants by preferences
  • Packaging and tips to keep the app small, fast, and maintainable

Hardware & software checklist

  • Raspberry Pi 5 (2026 model)
  • AI HAT+ 2 (released late 2025; used for on-device inferencing)
  • 32–256 GB NVMe or fast SD card (NVMe recommended for heavy routing preprocessing)
  • Power supply, optional USB SSD (for map and routing files) — for field power and compact deployments see our compact power & portable POS guide.
  • Raspberry Pi OS (64-bit) or Ubuntu 22.04/24.04 server
  • Node.js 20+ (for the micro app backend) and Python 3.11+ (for model runtime & preprocessing)

High-level architecture

  1. Frontend: static Leaflet web UI served by Express. Uses local tile server and local routing API.
  2. Tile server: tileserver serving MBTiles prepared from OpenStreetMap extract.
  3. Routing: OSRM or a compact alternative for local extracts; runs on-device and returns simple geometry and travel times.
  4. Ranking model: a tiny ML model (exported to TFLite/ONNX) that scores restaurants using local features and user preferences, executed on AI HAT+ 2.
  5. Data store: lightweight SQLite or JSON files containing restaurant metadata and precomputed indices.

Design choices explained

  • MBTiles: single-file, offline-friendly tile container. Perfect for constrained devices.
  • OSRM for routing: mature and fast for small extracts; precompute on-device for a city to keep RAM use bounded. Alternatives: GraphHopper/Valhalla — heavier; pgrouting — more complex.
  • TFLite/ONNX: small inference runtimes supported by AI HAT+ 2 with hardware acceleration.
  • SQLite: simple, stable, and ideal for micro apps compared to full-blown Postgres.

Step 1 — Prepare the Raspberry Pi 5 and AI HAT+ 2

  1. Flash Raspberry Pi OS (64-bit) or Ubuntu Server and enable SSH; Raspberry Pi Imager makes this straightforward.
  2. Update and install essentials:
    sudo apt update && sudo apt upgrade -y
    sudo apt install -y build-essential git curl sqlite3 python3-pip nodejs npm
  3. Follow vendor instructions to attach and enable the AI HAT+ 2. Install drivers and the vendor runtime. On most stacks this includes an accelerated runtime that integrates with TensorFlow Lite or ONNX Runtime. Example (generic):
    curl -sSL https://aihat2.vendor/install.sh | sudo bash
    pip install onnxruntime tflite-runtime
    Note: replace the installer URL above with the official vendor release (AI HAT+ 2 installers appeared in late 2025); a quick sanity check from Python follows below.
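
After installation, a quick sanity check confirms both runtimes load and shows which execution providers are exposed. The exact name of the accelerated provider depends on the vendor package, so treat the comments below as placeholders:

# Sanity check: both inference runtimes import, and ONNX Runtime lists its providers
import onnxruntime as ort
print(ort.get_available_providers())  # look for the vendor's accelerated EP here

import tflite_runtime.interpreter as tflite
print(tflite.Interpreter)  # the CPU interpreter is always available as a fallback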

Step 2 — Acquire and prepare local map tiles (MBTiles)

We recommend extracting only the area you need (a city or region) to keep the MBTiles small. Use Geofabrik (OSM extracts) and tippecanoe/mapbox tools to create MBTiles.

  1. Download a small OSM extract from Geofabrik (for a single city).
  2. Use TileServer GL or tileserver-gl-light on the Pi to serve MBTiles. Example: install tileserver-gl (Node):
    npm install -g tileserver-gl
    # copy your my-tiles.mbtiles into /home/pi/tiles
    tileserver-gl /home/pi/tiles/my-tiles.mbtiles --port 8080
  3. If your MBTiles are vector tiles, Leaflet with the maplibre-gl plugin or MapLibre directly is preferred. For raster tiles, native Leaflet tile layers work fine.

Creating MBTiles with tippecanoe (example)

# on a more powerful machine (faster), create vector tiles from GeoJSON
# (convert an OSM PBF to GeoJSON first, e.g. with osmium export)
tippecanoe -o city.mbtiles -zg --drop-densest-as-needed city.geojson

Copy the MBTiles to the Pi (scp or rsync). Keep zoom range limited (e.g., 12–16) for small size.
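
Because an MBTiles file is a single SQLite database, you can sanity-check the copy directly on the Pi. A minimal sketch, assuming the standard metadata and tiles tables:

import sqlite3

# Inspect zoom range, tile format, and tile count of the copied MBTiles
db = sqlite3.connect('/home/pi/tiles/my-tiles.mbtiles')
meta = dict(db.execute('SELECT name, value FROM metadata'))
print('format:', meta.get('format'), '| zooms:', meta.get('minzoom'), '-', meta.get('maxzoom'))
print('tiles:', db.execute('SELECT COUNT(*) FROM tiles').fetchone()[0])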

Step 3 — Set up local routing (OSRM) for a small area

OSRM gives fast routing responses but needs preprocessing. For a single city extract it’s practical on the Pi 5.

  1. Install OSRM backend (compile from source or use prebuilt). Example install steps (summary):
    sudo apt install -y cmake g++ libboost-all-dev libstxxl-dev libprotobuf-dev protobuf-compiler liblua5.2-dev libbz2-dev libzip-dev
    git clone https://github.com/Project-OSRM/osrm-backend.git
    cd osrm-backend
    mkdir -p build && cd build
    cmake ..
    cmake --build . -j4
    sudo cmake --install .
  2. Get a small PBF extract for your area and preprocess:
    osrm-extract -p ../profiles/car.lua city-latest.osm.pbf
    # MLD pipeline: partition + customize (osrm-contract is only for the CH pipeline)
    osrm-partition city-latest.osrm
    osrm-customize city-latest.osrm
    # run
    osrm-routed --algorithm mld city-latest.osrm --port 5000
    Notes: the MLD pipeline keeps runtime memory lower than CH. Preprocessing time and disk use vary by area.
  3. Test a route with curl:
    curl 'http://localhost:5000/route/v1/driving/-122.42,37.78;-122.45,37.91?overview=false'
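
From application code, the same query is a plain HTTP GET. A minimal sketch using only the Python standard library (coordinates are lon,lat pairs, as in the curl example):

import json
import urllib.request

# Query the local OSRM instance and report distance/time for the first route
src, dst = '-122.42,37.78', '-122.45,37.91'  # lon,lat
url = f'http://localhost:5000/route/v1/driving/{src};{dst}?overview=false'
with urllib.request.urlopen(url) as resp:
    route = json.load(resp)['routes'][0]
print(f"{route['distance']/1000:.1f} km, {route['duration']/60:.0f} min")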
    

Memory & space tips

  • Limit extract to the city polygon. Smaller extracts reduce RAM and disk requirements dramatically.
  • Run preprocessing on a desktop if the Pi is underpowered, then copy the .osrm files to the Pi.
  • Use the MLD pipeline and reduce the number of annotations to save memory.

Step 4 — Build the micro app backend and frontend

The backend will serve static UI, expose the local tile endpoint, proxy OSRM routing, and run ranking inference when asked. Keep the stack small: Node (Express) for HTTP and a tiny frontend with MapLibre/Leaflet.

Backend (Express) — minimal server.js

const express = require('express');
const fetch = require('node-fetch'); // node-fetch v2 (the CommonJS build)
const sqlite3 = require('sqlite3');
const {execFile} = require('child_process');
const app = express();
const db = new sqlite3.Database('./restaurants.db');

app.use(express.static('public'));

// Proxy vector tiles to the local tileserver ({your-layer} is the data id shown by tileserver-gl)
app.get('/tiles/:z/:x/:y.pbf', async (req, res)=>{
  const {z, x, y} = req.params;
  const r = await fetch(`http://localhost:8080/data/{your-layer}/${z}/${x}/${y}.pbf`);
  res.status(r.status).set('Content-Type', 'application/x-protobuf');
  r.body.pipe(res);
});

// Route via OSRM
app.get('/route', async (req, res)=>{
  const {src, dst} = req.query; // src=-122.42,37.78&dst=-122.45,37.91
  const r = await fetch(`http://localhost:5000/route/v1/driving/${src};${dst}?overview=false`);
  const json = await r.json();
  res.json(json);
});

// Ranking endpoint: calls a Python script that runs the TFLite/ONNX model
app.get('/rank', (req, res)=>{
  const prefs = JSON.stringify(req.query);
  // execFile passes prefs as an argv entry, avoiding shell quoting and injection issues
  execFile('python3', ['rank_model.py', prefs], (err, stdout)=>{
    if (err) return res.status(500).send(err.toString());
    res.json(JSON.parse(stdout));
  });
});
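
// Nearby restaurants for the current map view (hypothetical /nearby endpoint used by
// the frontend; a plain bounding-box query, or swap in the R-tree index from the
// optimization tips if the table grows large)
app.get('/nearby', (req, res)=>{
  const {minLat, maxLat, minLon, maxLon} = req.query;
  db.all('SELECT id,lat,lon,cuisine,rating FROM restaurants WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?',
    [minLat, maxLat, minLon, maxLon],
    (err, rows)=> err ? res.status(500).send(err.toString()) : res.json(rows));
});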

app.listen(3000, ()=> console.log('App on :3000'));

Frontend (public/index.html)

Use MapLibre GL or Leaflet to display the map, query nearby restaurants from SQLite via a small REST endpoint, show routes by calling /route, and ask /rank to rank visible choices. Keep the UI minimal — a map, a filter panel, and a ‘Recommend’ button.

Step 5 — On-device ranking model

We recommend a compact model with a few dozen features — distance, cuisine match, rating, estimated travel time, group preference vectors. Train on your laptop and export to TFLite or ONNX for on-device inference on AI HAT+ 2.

Training (brief)

  1. Collect sample data: known preferences, restaurant attributes, and chosen restaurants.
  2. Train a simple LightGBM or small dense NN that outputs a score. Keep the model tiny (roughly 200 KB to 5 MB) so inference stays fast.
  3. Export to ONNX or TFLite. Example (TensorFlow):
    # export to TFLite (assumes a Keras/TensorFlow SavedModel on disk)
    import tensorflow as tf
    converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()
    open('model.tflite','wb').write(tflite_model)
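
If you go the small-dense-network route, here is a minimal end-to-end sketch covering both the model and the full-int8 quantization mentioned in the optimization tips below. The layer sizes, the 24-feature input, and the random stand-in data are illustrative placeholders for your own training pipeline:

import numpy as np
import tensorflow as tf

# Stand-in training data: replace with your real feature matrix and labels
X_train = np.random.rand(500, 24).astype(np.float32)
y_train = np.random.randint(0, 2, size=500)

# Tiny scorer: a few dozen input features -> one relevance score
model = tf.keras.Sequential([
    tf.keras.Input(shape=(24,)),                  # 24 features is illustrative
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(X_train, y_train, epochs=5, verbose=0)
model.export('saved_model')  # Keras 3; on older TF use tf.saved_model.save(model, 'saved_model')

# Full-int8 quantization needs a representative sample of real feature vectors
def representative_dataset():
    for row in X_train[:200]:
        yield [np.asarray([row], dtype=np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
open('model_int8.tflite', 'wb').write(converter.convert())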
    

Runtime (Python using tflite-runtime)

import sys, json
import sqlite3
import numpy as np
import tflite_runtime.interpreter as tflite

prefs = json.loads(sys.argv[1])
# load restaurants from SQLite
conn = sqlite3.connect('restaurants.db')
rows = conn.execute('SELECT id,lat,lon,cuisine,rating FROM restaurants').fetchall()

interpreter = tflite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def compute_features(row, prefs):
    # Placeholder feature builder: must mirror the pipeline used at training time.
    # Here: normalized rating plus a cuisine-match flag (illustrative only).
    _id, lat, lon, cuisine, rating = row
    return [rating / 5.0, 1.0 if cuisine == prefs.get('cuisine') else 0.0]

results = []
for r in rows:
    features = compute_features(r, prefs)  # packed into a float32 vector below
    interpreter.set_tensor(input_details[0]['index'], np.array([features], dtype=np.float32))
    interpreter.invoke()
    score = interpreter.get_tensor(output_details[0]['index'])[0][0]
    results.append({'id': r[0], 'score': float(score)})

# return top results
print(json.dumps(sorted(results, key=lambda x:-x['score'])[:10]))

Use the vendor SDK to get HW acceleration via the AI HAT+ 2 if supported by the runtime (ONNX Runtime or TFLite delegates). For guidance on deploying compact field kits and reliable power, see our field reviews on portable power and pop-up kits and recommendations for powering tech-heavy builds at How to Power a Tech-Heavy Shed.
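
As a concrete sketch, loading a hardware delegate in tflite-runtime looks like the following. load_delegate is a real tflite-runtime call, but the shared-library name below is a placeholder for whatever the AI HAT+ 2 vendor runtime installs:

import tflite_runtime.interpreter as tflite

# Try the vendor delegate first; fall back to CPU if it is missing or incompatible
try:
    delegate = tflite.load_delegate('libvendor_delegate.so')  # placeholder name
    interpreter = tflite.Interpreter(model_path='model.tflite',
                                     experimental_delegates=[delegate])
except (OSError, ValueError):
    interpreter = tflite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()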

Step 6 — Connect everything and UX notes

  • When the user opens the app, the frontend queries the local DB for nearby restaurants (spatial query via bounding box). Show pins and allow selection.
  • Pressing ‘Recommend’ sends the current view + user preference vector to /rank; backend runs the model and returns ranked IDs.
  • Selecting a restaurant requests /route to show walking/driving time and polylines from the current location (or pinned start).
  • Cache common responses and precompute cluster tiles when possible to keep the UI snappy.

Optimization & production tips

  • Keep extracts small: only city-level OSM extracts drastically reduce OSRM and MBTiles size.
  • Quantize models: use int8 or float16 to shrink model size and improve latency on the AI HAT+ 2.
  • Precompute frequently used routes: if you have common start points (e.g., office building), precompute and cache.
  • Use SQLite FTS or R-tree: enable quick spatial and text search for cuisine and names (see the sketch after this list).
  • Monitor resource use: track CPU, memory, and I/O. Pi 5 is capable but not infinite. For lightweight field kit reviews and hardware picks, see our Field Toolkit Review: Running Profitable Micro Pop‑Ups in 2026.
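
A minimal sketch of the R-tree approach, assuming SQLite was built with the R-tree module (true for stock Raspberry Pi OS packages) and the restaurants table used earlier; the index name is illustrative:

import sqlite3

conn = sqlite3.connect('restaurants.db')
# R-tree virtual table: each restaurant point stored as a degenerate box
conn.execute("""CREATE VIRTUAL TABLE IF NOT EXISTS restaurant_idx
                USING rtree(id, min_lat, max_lat, min_lon, max_lon)""")
conn.execute('INSERT OR REPLACE INTO restaurant_idx SELECT id, lat, lat, lon, lon FROM restaurants')
conn.commit()

def nearby(min_lat, max_lat, min_lon, max_lon):
    # Bounding-box lookup via the R-tree, joined back for attributes
    return conn.execute("""
        SELECT r.id, r.lat, r.lon, r.cuisine, r.rating
        FROM restaurant_idx i JOIN restaurants r ON r.id = i.id
        WHERE i.min_lat >= ? AND i.max_lat <= ?
          AND i.min_lon >= ? AND i.max_lon <= ?""",
        (min_lat, max_lat, min_lon, max_lon)).fetchall()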

Troubleshooting common problems

  • OSRM crashes on low RAM: re-run preprocessing on a stronger machine and copy files over, or use smaller extracts.
  • Tiles not loading: check tileserver logs and ensure CORS and correct tile paths for MapLibre/Leaflet.
  • Model slow: enable hardware delegates for TFLite or use ONNX Runtime with the vendor EP for AI HAT+ 2.
  • Disk I/O bottleneck: use an NVMe SSD via USB-C for heavy tile/routing files; see power and field kit notes at Pop-Up Power — Compact Solar & Portable POS.

Recommendation: Start small (a single neighborhood). Get a working end-to-end loop before expanding the spatial extent — you’ll save time and storage.

Real-world considerations: privacy, updates, and maintenance

Because everything runs locally, user data and preferences stay private — a major benefit in 2026 privacy-conscious deployments. For updates you can:

  • Ship periodic MBTiles and routing updates via USB or local network sync
  • Provide a compact delta mechanism that replaces only changed tiles (see the sketch after this list)
  • Allow manual re-training and model deployment via secure transfer (signed model packages)
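
Because MBTiles is plain SQLite, a tile delta can be applied with one query: attach the updated file and copy only the rows whose data changed. A minimal sketch, assuming both files store a real tiles table (some tools write tiles as a view over map/images tables, in which case update those instead):

import sqlite3

# Merge changed/new tiles from an update file into the deployed MBTiles
conn = sqlite3.connect('/home/pi/tiles/my-tiles.mbtiles')
conn.execute("ATTACH DATABASE 'city-update.mbtiles' AS upd")
conn.execute("""
    INSERT OR REPLACE INTO tiles (zoom_level, tile_column, tile_row, tile_data)
    SELECT n.zoom_level, n.tile_column, n.tile_row, n.tile_data
    FROM upd.tiles n
    LEFT JOIN tiles o ON o.zoom_level = n.zoom_level
       AND o.tile_column = n.tile_column AND o.tile_row = n.tile_row
    WHERE o.tile_data IS NULL OR o.tile_data != n.tile_data""")
conn.commit()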

By 2026, three trends make this pattern compelling:

  • Edge AI acceleration (AI HAT+ 2 and similar devices) makes real-time inferencing possible on low-cost hardware.
  • Micro apps and vibe-coding have matured — small, single-purpose apps are a productivity pattern for teams and individuals alike.
  • Open data and tooling (OSM, MBTiles, MapLibre, OSRM) have active ecosystems, enabling offline-first experiences without vendor lock-in.

Advanced strategies & future enhancements

  • Federated preference learning: run local fine-tuning of the ranking model on-device, then optionally upload aggregated deltas (privacy-preserving) to improve global models.
  • Hybrid routing: use simple heuristics when memory is constrained and fall back to OSRM for complex requests (see the sketch after this list).
  • Vector tile styling: precompute styles to reduce client-side rendering load and speed up MapLibre on low-power GPUs.
  • Voice & multimodal: with AI HAT+ 2’s inferencing, add local voice queries and short LLM prompts for better suggestions (keep models small). See ideas for launching local audio experiences in Launch a Local Podcast.
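
For the hybrid-routing idea above, the constrained-memory fallback can be as crude as straight-line distance scaled by an average speed. A sketch; the 1.4 detour factor and the speeds are assumptions to tune per city:

import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two WGS84 points, in kilometres
    rlat1, rlat2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = rlat2 - rlat1, math.radians(lon2 - lon1)
    a = math.sin(dlat/2)**2 + math.cos(rlat1)*math.cos(rlat2)*math.sin(dlon/2)**2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def rough_travel_minutes(lat1, lon1, lat2, lon2, mode='walking'):
    # Detour factor inflates straight-line distance toward street distance
    speed_kmh = {'walking': 4.5, 'driving': 30.0}[mode]
    return haversine_km(lat1, lon1, lat2, lon2) * 1.4 / speed_kmh * 60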

Actionable takeaways

  • Start with a small geographic extract to make routing and MBTiles manageable on Pi 5.
  • Train a tiny ranking model on a laptop and export to TFLite/ONNX; use AI HAT+ 2 delegates for fast on-device scoring.
  • Use Node + tileserver + OSRM to glue tiles, routing, and ranking into an offline-first micro app.
  • Iterate: validate the UI and workflow with real users before scaling coverage.

Wrap-up and next steps

Building a dining-finder micro app that runs fully offline on Raspberry Pi 5 is realistic in 2026. The key is modularity: local tiles (MBTiles), a bounded routing extract (OSRM), and a tiny on-device ranking model (TFLite/ONNX) accelerated by AI HAT+ 2. Start with a single neighborhood, optimize model size and tile zoom levels, and you’ll have a fast, private, and maintainable micro app that solves a real pain point: reliable decisions when connectivity fails.

Call to action

Ready to try it? Clone our starter repo (it includes a minimal Express server, a sample MBTiles file, and an example TFLite ranker) from our GitHub and follow the step-by-step README. If you want help sizing extracts or quantizing models for your area, reach out — we’ll help you ship a lightweight, offline-first micro app that runs on your Pi in a day. For practical field kit recommendations see Field Toolkit Review: Running Profitable Micro Pop‑Ups in 2026 and our Pop-Up Power — Compact Solar & Portable POS guide.
