FoodFinder feature: AI-powered food identification for carb entry#1
Closed
taylorpatterson-T1D wants to merge 58 commits into main from
Conversation
Updated translations from Lokalise on Sun Aug 24 12:32:21 PDT 2025
Updated translations from Lokalise on Sat Aug 30 10:22:12 PDT 2025
Bolus view fixes, and updates for iOS26
add missing localization strings for Favorite Foods, add comments to Favorite Foods string that were already localized, remove some items that do not require localization
Convert to String Catalogs
Disable Liquid Glass
Clear bolus recommendation on initial edit
Support audio for pump managers that use silent audio for keep-alive
Updated translations from lokalise on Tue Sep 23 15:51:19 PDT 2025
Updated translations from lokalise on Fri Oct 24 11:10:09 PDT 2025
Updated translations from lokalise on Wed Nov 19 09:07:32 PST 2025
Update linting for Live Activity
Updated translations from lokalise on Sat Dec 27 14:50:21 PST 2025
Updated translations from lokalise on Sun Feb 1 09:46:29 PST 2026
- improve large font display
- add section headings
- use short labels for display, long labels for description
- enable autoscaling in Live Activity widget to limit truncation
- bug fix for plot using glucose color

author: bastiaanv
FoodFinder adds barcode scanning, AI camera analysis, voice search, and text-based food lookup to Loop's carb entry workflow. All feature code lives in dedicated FoodFinder/ subdirectories with FoodFinder_-prefixed filenames for clean isolation and portability to other Loop forks.

Integration touchpoints: ~29 lines across 3 existing files (CarbEntryView, SettingsView, FavoriteFoodDetailView). The feature is controlled by a single toggle in FoodFinder_FeatureFlags.swift.

New files: 34 (11 views, 3 models, 13 services, 2 view models, 1 feature flags, 1 documentation, 3 tests)
Voice search (microphone button) now uses the AI analysis pipeline instead of USDA text search, enabling natural language food descriptions like "a medium bowl of spicy ramen and a side of gyoza". Text-typed searches continue using USDA/OpenFoodFacts as before.

Changes:
- SearchBar: add mic button with voice search callback
- SearchRouter: add analyzeFoodByDescription() routing through AI providers
- SearchViewModel: add performVoiceSearch() async method
- EntryPoint: wire VoiceSearchView sheet to AI analysis pipeline
Replace the separate mic button with automatic natural language detection. When the user dictates into the search field via iOS keyboard dictation, the text is analyzed: short queries (1-3 words like "apple") use USDA, while longer descriptive phrases (4+ words like "a medium bowl of spicy ramen and a side of gyoza") automatically route to the AI analysis path.

Changes:
- SearchBar: remove mic button and onVoiceSearchTapped parameter
- SearchViewModel: add isNaturalLanguageQuery() heuristic; route detected natural language through performVoiceSearch in performFoodSearch
- EntryPoint: remove voice search sheet; wire onGenerativeSearchResult callback to handleAIFoodAnalysis
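The routing heuristic above can be sketched as follows. This is an illustrative reconstruction, not Loop's actual implementation: only the function name (isNaturalLanguageQuery) and the word-count thresholds (1-3 words stay on USDA, 4+ go to AI) come from the commit message.

```swift
import Foundation

// Hedged sketch of the natural-language detection heuristic described
// above. Thresholds come from the PR notes; the tokenization details
// here are assumptions.
func isNaturalLanguageQuery(_ query: String) -> Bool {
    let words = query
        .trimmingCharacters(in: .whitespacesAndNewlines)
        .split { $0.isWhitespace }
    // 1-3 words ("apple", "chicken salad") -> USDA text search;
    // 4+ words -> natural-language AI analysis path.
    return words.count >= 4
}
```

Under this sketch, "apple" stays on the USDA path while "a medium bowl of spicy ramen" routes to AI analysis.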
The Python script created group definitions but didn't properly attach all of them to their parent groups.

Fixes:
- Services group: now child of Loop app root (was orphaned)
- Resources group: now child of Loop app root (was orphaned)
- Documentation group: now child of project root (was orphaned)
- ViewModels/FoodFinder: moved from Loop root to View Models group
- Tests/FoodFinder: moved from project root to LoopTests group
…, analysis history
- Fix triple barcode fire by consuming scan result immediately in Combine sink
- Replace AsyncImage with pre-downloaded thumbnail to avoid SwiftUI rebuild issues
- Use smallest OFF thumbnail (100px) with static food icon fallback for slow servers
- Add secure Keychain storage for AI provider API keys
- Add analysis history tracking with FoodFinder_AnalysisRecord
- Consolidate AI provider settings and remove BYOTestConfig
- Remove barcode connectivity pre-check that added 3+ seconds of latency per scan
- Add NSCache to ImageDownloader for thumbnail deduplication (50 items, 10MB)
- Remove artificial minimumSearchDuration delay from search and error paths
- Merge duplicate Combine observers into single combineLatest for AI recomputation
- Decode image_thumb_url from OpenFoodFacts API for smallest available thumbnail
- Wrap 369 bare print() calls in #if DEBUG across 8 FoodFinder files
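The thumbnail deduplication cache above might look something like this. The 50-item and 10 MB limits come from the commit message; the class name, method names, and keying-by-URL-string are assumptions for illustration, not Loop's actual ImageDownloader API.

```swift
import Foundation

// Illustrative sketch of an NSCache-backed thumbnail cache with the
// limits stated in the PR notes (50 items, ~10 MB).
final class ThumbnailCache {
    // NSCache values must be class instances, so wrap the raw bytes.
    private final class Entry {
        let data: Data
        init(_ data: Data) { self.data = data }
    }

    private let cache = NSCache<NSString, Entry>()

    init() {
        cache.countLimit = 50                     // at most 50 thumbnails
        cache.totalCostLimit = 10 * 1024 * 1024   // ~10 MB total
    }

    func data(for url: URL) -> Data? {
        cache.object(forKey: url.absoluteString as NSString)?.data
    }

    func store(_ data: Data, for url: URL) {
        // Cost is the byte count, so totalCostLimit maps to memory use.
        cache.setObject(Entry(data),
                        forKey: url.absoluteString as NSString,
                        cost: data.count)
    }
}
```

NSCache evicts automatically under memory pressure, which suits transient artwork like OpenFoodFacts thumbnails: a repeated search hits the cache instead of re-downloading.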
…eaders

File consolidations (6 files removed, 2 new files created):
1. FoodFinder_ScanResult.swift + FoodFinder_VoiceResult.swift → FoodFinder_InputResults.swift
2. FoodFinder_FavoriteDetailView.swift + FoodFinder_FavoriteEditView.swift + FoodFinder_FavoritesView.swift → FoodFinder_FavoritesHelpers.swift
3. FoodFinder_AISettingsManager.swift → absorbed into FoodFinder_AIProviderConfig.swift
4. FoodFinder_FavoritesViewModel.swift → absorbed into FoodFinder_SearchViewModel.swift

Other changes:
- Fix long analysis titles overflowing the screen by programmatically truncating picker row names and constraining food type to 20 chars
- Improve AI prompts for menu/recipe/text image analysis
- Add text-only AI analysis path in AIServiceManager
- Increase AI token budget for multi-item responses
- Standardize all 26 FoodFinder file headers with consistent format
- Add originalAICarbs and aiConfidencePercent fields to FoodFinder_AnalysisRecord for tracking AI estimate accuracy
- Add Notification.Name.foodFinderMealLogged for real-time meal event observation
- Add MealDataProvider protocol with date-range query interface and AnalysisHistoryStore conformance
- Add "Last 30 days" retention option to Analysis History settings
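A minimal sketch of the date-range query interface described above. The type names MealDataProvider and the originalAICarbs/aiConfidencePercent fields come from the commit message; the exact signatures, the record's other fields, and the in-memory store (standing in for AnalysisHistoryStore) are assumptions.

```swift
import Foundation

// Sketch of the analysis-record shape implied by the PR notes.
struct AnalysisRecord {
    let date: Date
    let carbs: Double             // user-confirmed grams
    let originalAICarbs: Double?  // AI's initial estimate, for accuracy tracking
    let aiConfidencePercent: Int?
}

// Date-range query interface, as described in the commit message.
protocol MealDataProvider {
    func meals(from start: Date, to end: Date) -> [AnalysisRecord]
}

// Minimal in-memory conformance, standing in for AnalysisHistoryStore.
final class InMemoryHistoryStore: MealDataProvider {
    private var records: [AnalysisRecord] = []

    func append(_ record: AnalysisRecord) { records.append(record) }

    func meals(from start: Date, to end: Date) -> [AnalysisRecord] {
        records.filter { $0.date >= start && $0.date <= end }
    }
}
```

A "Last 30 days" retention policy then reduces to one query: fetch everything from `Date()` minus 30 days and discard the rest.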
- Absorption time model: conservative adjustments anchored to Loop's 3-hour default. FPU adds +0/+0.5/+1.0 hr (was +1/+2.5/+4), fiber +0/+0.25/+0.5 (was +0/+1/+2), meal size +0/+0.25/+0.5 (was +0/+1/+2). Cap reduced from 8 to 5 hours. Updated AI prompt and 3 examples.
- OCR routing fix: raised menu detection threshold from 1 to 5 significant lines and always include the image on the menu path to prevent food-photo misclassification (fixes "Unidentifiable Food Item" on food photos).
- Inline "Why X hrs?" pill on the Absorption Time row replaces the standalone DisclosureGroup row. Purple centered pill with fixed width; expands reasoning on tap. Uses AIAbsorptionTimePickerRow when AI-generated.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
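The adjusted absorption-time model can be expressed compactly. The 3-hour anchor, the per-factor increments, and the 5-hour cap all come from the commit message; the three-tier enum and function shape are illustrative assumptions (the real model presumably derives these tiers from the AI's nutrition estimates).

```swift
import Foundation

// Sketch of the conservative absorption-time model described above.
enum Tier { case low, medium, high }

func absorptionHours(fpu: Tier, fiber: Tier, mealSize: Tier) -> Double {
    let base = 3.0  // Loop's default absorption time

    // Conservative increments from the PR notes (old values in parens):
    let fpuAdd:   [Tier: Double] = [.low: 0, .medium: 0.5,  .high: 1.0]  // was +1/+2.5/+4
    let fiberAdd: [Tier: Double] = [.low: 0, .medium: 0.25, .high: 0.5]  // was +0/+1/+2
    let sizeAdd:  [Tier: Double] = [.low: 0, .medium: 0.25, .high: 0.5]  // was +0/+1/+2

    let total = base + fpuAdd[fpu]! + fiberAdd[fiber]! + sizeAdd[mealSize]!
    return min(total, 5.0)  // cap reduced from 8 to 5 hours
}
```

With every factor at its maximum, the result is 3 + 1.0 + 0.5 + 0.5 = 5.0 hours, so the cap now binds only at the extreme rather than routinely truncating estimates as the old 8-hour model's larger increments did.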
This PR replaces the legacy FoodFinder PR #2329
The Problem We're Solving
Carb counting is the single hardest daily task for people managing diabetes with Loop. Every meal requires estimating carbohydrate content — and getting it wrong directly impacts Time in Range. Current workflow: the user mentally estimates carbs, types a number, and hopes for the best. There's no assistance, no database lookup, no learning from past meals.
What FoodFinder Does
FoodFinder adds AI-powered food identification directly into Loop's existing Add Carb Entry screen. It provides four ways to identify food and auto-populate carb values:
Text Search
Type a food name (e.g., "banana" or "chicken") into the search bar. FoodFinder queries the USDA FoodData Central open database and returns matching products with nutrition data. Select a result and carbs, fat, protein, fiber, and calories auto-populate, with serving size adjustments built in. This path is best suited to simple, single-word foods like fruits and vegetables.
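As a rough sketch, a text-search request against the public FoodData Central search endpoint could be built like this. The endpoint and query/api_key/pageSize parameters are the FDC API's; "DEMO_KEY", the page size of 25, and the function name are placeholder assumptions, and FoodFinder's actual service layer is not shown.

```swift
import Foundation

// Hypothetical request builder for the USDA FoodData Central
// /foods/search endpoint used by the text-search path.
func fdcSearchURL(query: String, apiKey: String = "DEMO_KEY") -> URL? {
    var components = URLComponents(string: "https://api.nal.usda.gov/fdc/v1/foods/search")
    components?.queryItems = [
        URLQueryItem(name: "query", value: query),      // the typed food name
        URLQueryItem(name: "pageSize", value: "25"),    // keep result lists short
        URLQueryItem(name: "api_key", value: apiKey),
    ]
    return components?.url
}
```

The JSON response's per-food nutrient array is what would populate the carb, fat, protein, fiber, and calorie fields after the user picks a result.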
Voice / Dictation Search
Speak or dictate a food description. FoodFinder automatically detects dictation input (entered via the iOS keyboard's mic button) and routes it through AI generative search. Say "I am eating a turkey sandwich with cheese and a side of chips" and the AI analyzes the full meal description, estimates portions, and returns structured nutrition data for you to confirm.
AI Image Analysis
Tap the camera button, take a photo of your meal, and the AI analyzes visible portions using scale references (plate size, utensil dimensions, known object sizes). It returns a per-item nutrition breakdown with confidence scoring, USDA-referenced serving sizes, and a recommended absorption time adjustment. Multi-item plates are supported: each detected food item is listed individually and can be excluded if the user plans to skip it.
Menu & Recipe Analysis (with Translation)
Point the camera at a restaurant menu, recipe card, or text listing food items in any language. FoodFinder runs on-device OCR first; if text is detected, it routes through a specialized text analysis path that transcribes, translates, and estimates nutrition using USDA standard serving sizes. It has been tested with menus in Spanish, Portuguese, Russian, German, and French; CJK scripts (Hanzi, Kanji, Hanja) have not been tested yet.
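The OCR-first routing decision (and the threshold fix noted in the commit history, which raised menu detection from 1 to 5 significant lines so food photos with incidental text aren't misrouted) can be sketched as follows. The function name and the definition of a "significant" line are assumptions; only the 5-line threshold comes from the PR.

```swift
import Foundation

// Hedged sketch: decide whether OCR output looks like a menu/recipe
// (route to the text-analysis path) or stray text on a food photo
// (route to image analysis).
func isLikelyMenuOrRecipe(ocrLines: [String]) -> Bool {
    let significant = ocrLines
        .map { $0.trimmingCharacters(in: .whitespaces) }
        .filter { $0.count >= 3 }   // drop one/two-character OCR artifacts
    // Threshold raised from 1 to 5 per the PR, so a plate with a
    // printed logo no longer triggers the menu path.
    return significant.count >= 5
}
```

In the shipped fix the image is also always sent along on the menu path, so a borderline misclassification degrades gracefully instead of producing "Unidentifiable Food Item".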
User-Configurable Settings
All settings are in Loop Settings → FoodFinder:
Model name (e.g., gpt-4o, claude-sonnet-4-5-20250929, gemini-2.0-flash); be sure to use models that support image processing.

Safety Considerations
Impact on Existing Loop Code
FoodFinder was designed for minimal integration footprint and easy containment within Loop:
- All new code is prefixed FoodFinder_ and lives in dedicated subdirectories.

Modified existing files:
- CarbEntryView.swift: FoodFinder_EntryPoint (~5 lines) + analysis history picker
- SettingsView.swift
- CarbEntryViewModel.swift
- FavoriteFoodDetailView.swift
- FavoriteFoodsView.swift
- AddEditFavoriteFoodView.swift
- AddEditFavoriteFoodViewModel.swift

New file locations (all under Loop/):
- Models/FoodFinder/
- View Models/FoodFinder/
- Views/FoodFinder/
- Services/FoodFinder/
- Resources/FoodFinder/
- Documentation/FoodFinder/
- LoopTests/FoodFinder/

Screenshots
Video Demo
YouTube Demo: https://youtu.be/i8xToAYBe4M
Test Plan
For reviewers and field testers:
Asking @marionbarker for review upon availability.