User lands on the platform and uploads a reference photo of themselves for virtual try-on calibration.
User describes their need in natural language: occasion, dress code, color/fabric preferences, or simply a vibe.
Drag & drop product images onto the try-on canvas. Select items from the results grid.
Bookmark outfits and revisit them before the event. Rate confidence 1–5. Share or purchase directly.
Parses free-text input. Extracts intent signals: formality, color palette, body preference, occasion type.
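The intent-extraction step can be sketched as below. This is a minimal keyword-rule sketch for illustration only; the signal vocabularies (`FORMALITY`, `PALETTES`) are invented, and a production parser would more likely use an LLM or a trained classifier.

```python
import re
from dataclasses import dataclass

# Hypothetical signal vocabularies -- illustrative, not the real taxonomy.
FORMALITY = {"wedding": "formal", "gala": "formal", "office": "business",
             "brunch": "casual", "festival": "casual"}
PALETTES = {"pastel", "earth", "monochrome", "neutral", "jewel"}

@dataclass
class IntentSignals:
    formality: str = "unspecified"
    palette: str = "unspecified"
    occasion: str = "unspecified"

def parse_intent(query: str) -> IntentSignals:
    """Extract coarse intent signals from a free-text styling request."""
    signals = IntentSignals()
    for tok in re.findall(r"[a-z]+", query.lower()):
        if tok in FORMALITY:
            signals.occasion = tok
            signals.formality = FORMALITY[tok]
        elif tok in PALETTES:
            signals.palette = tok
    return signals

print(parse_intent("Something in pastel tones for a garden wedding"))
# → IntentSignals(formality='formal', palette='pastel', occasion='wedding')
```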
Maps query intent to Zara product catalogue using vector similarity. Ranks by relevance and style coherence.
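The vector-similarity ranking reduces to cosine similarity between a query embedding and pre-computed catalogue embeddings. The toy 3-dimensional vectors and item names below are made up; real embeddings would come from the pre-embedded Zara catalogue.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Toy catalogue embeddings -- hand-written stand-ins for the real ones.
catalogue = {
    "linen blazer":    [0.9, 0.1, 0.2],
    "denim jacket":    [0.2, 0.8, 0.3],
    "silk slip dress": [0.7, 0.2, 0.9],
}

def rank(query_vec, items, top_k=2):
    """Return the top_k catalogue items most similar to the query."""
    scored = [(cosine(query_vec, v), sku) for sku, v in items.items()]
    return [sku for _, sku in sorted(scored, reverse=True)[:top_k]]

print(rank([0.8, 0.1, 0.8], catalogue))
# → ['silk slip dress', 'linen blazer']
```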
Groups matched items into coherent outfits. Ensures visual harmony across top, bottom, and accessories.
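Outfit grouping can be sketched as a constrained combination over slots. Here "visual harmony" is approximated by a single shared palette tag per outfit, which is a deliberately crude stand-in for whatever coherence model the system actually uses; items and tags are invented.

```python
from itertools import product

# Hypothetical matched items, each tagged with a coarse palette label.
items = [
    {"sku": "T1", "slot": "top",       "palette": "pastel"},
    {"sku": "T2", "slot": "top",       "palette": "earth"},
    {"sku": "B1", "slot": "bottom",    "palette": "pastel"},
    {"sku": "A1", "slot": "accessory", "palette": "pastel"},
    {"sku": "A2", "slot": "accessory", "palette": "earth"},
]

def build_outfits(items):
    """Combine one item per slot, keeping only palette-coherent outfits."""
    slots = ["top", "bottom", "accessory"]
    by_slot = {s: [i for i in items if i["slot"] == s] for s in slots}
    outfits = []
    for combo in product(*(by_slot[s] for s in slots)):
        if len({i["palette"] for i in combo}) == 1:  # harmony proxy
            outfits.append([i["sku"] for i in combo])
    return outfits

print(build_outfits(items))  # → [['T1', 'B1', 'A1']]
```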
Composites user photo + selected garments using generative model. Outputs styled portrait + optional motion clip.
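The compositing call itself depends on the generative model, but the job spec handed to it can be sketched. All field names below are assumptions, not the actual model API; note the optional motion clip maps to the 3–5 s video output.

```python
from dataclasses import dataclass, asdict, field

# Hypothetical job spec for the try-on compositing service.
@dataclass
class TryOnJob:
    user_photo: str
    garment_images: list
    output_motion_clip: bool = False  # optional 3-5 s walking clip
    clip_seconds: int = 4

def make_job(user_photo, garments, want_clip=False):
    """Assemble a compositing request from the user photo + selected items."""
    return asdict(TryOnJob(user_photo,
                           [g["image_url"] for g in garments],
                           output_motion_clip=want_clip))

job = make_job("user_123.jpg",
               [{"sku": "T1", "image_url": "pixia://T1.jpg"}], want_clip=True)
print(job["output_motion_clip"])  # → True
```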
Inditex/Zara's product API. Returns live inventory: SKU, price, sizes, product images, category taxonomy.
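Parsing the inventory response into typed records might look as follows. The payload shape here is illustrative; the real Inditex/Zara API schema may differ.

```python
import json
from dataclasses import dataclass

# Illustrative payload -- not the actual API response format.
SAMPLE = '''{"products": [
  {"sku": "Z-1001", "price": 49.95, "sizes": ["S", "M", "L"],
   "image": "pixia://Z-1001.jpg", "category": "blazers"}]}'''

@dataclass
class Product:
    sku: str
    price: float
    sizes: list
    image: str
    category: str

def parse_inventory(payload: str):
    """Deserialize a live-inventory payload into Product records."""
    return [Product(**p) for p in json.loads(payload)["products"]]

print(parse_inventory(SAMPLE)[0].sku)  # → Z-1001
```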
Pre-embedded Zara catalogue. Enables fast semantic retrieval without hitting the API for every query.
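The point of the pre-embedded catalogue is that query-time lookups never touch the live API. A minimal sketch of that cache-first pattern, with `fetch_from_api` as a hypothetical stand-in for the live call:

```python
# Minimal cache-first sketch: the catalogue is embedded once offline and
# served from memory; the API is only a fallback for cache misses.
api_calls = 0

def fetch_from_api(sku):
    """Hypothetical live-API fallback (placeholder vector)."""
    global api_calls
    api_calls += 1
    return [0.0, 0.0]

PREEMBEDDED = {"Z-1001": [0.9, 0.1], "Z-1002": [0.2, 0.8]}

def get_embedding(sku):
    vec = PREEMBEDDED.get(sku)
    return vec if vec is not None else fetch_from_api(sku)

get_embedding("Z-1001")
get_embedding("Z-1002")
print(api_calls)  # → 0: both lookups served from the pre-embedded store
```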
Stores user photo, profile, saved outfits, confidence ratings, and event history. Enables revisit flows.
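An illustrative in-memory schema for the user store, covering the revisit flow; all field names are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class SavedOutfit:
    skus: list
    confidence: int   # 1-5 rating
    event_date: str   # ISO date, so string comparison orders correctly

@dataclass
class UserProfile:
    user_id: str
    photo_ref: str
    outfits: list = field(default_factory=list)

    def save_outfit(self, skus, confidence, event_date):
        assert 1 <= confidence <= 5
        self.outfits.append(SavedOutfit(skus, confidence, event_date))

    def upcoming(self, today):
        """Revisit flow: outfits saved for events on or after today."""
        return [o for o in self.outfits if o.event_date >= today]

u = UserProfile("u1", "photo.jpg")
u.save_outfit(["T1", "B1"], 4, "2025-09-20")
print(len(u.upcoming("2025-09-01")))  # → 1
```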
High-res Pixia-linked product imagery served for the grid results page and as input to the try-on model.
Responsive grid of matched garments with Pixia images, price, and size. Filterable, sortable, saveable.
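The grid's filter/sort behavior reduces to predicate filtering plus a key-based sort. Item shapes below are invented for illustration:

```python
# Hypothetical grid items with price and available sizes.
grid = [
    {"sku": "Z-1", "price": 29.95, "sizes": ["S", "M"]},
    {"sku": "Z-2", "price": 89.95, "sizes": ["M", "L"]},
    {"sku": "Z-3", "price": 49.95, "sizes": ["S", "L"]},
]

def filter_and_sort(items, size=None, max_price=None, key="price"):
    """Apply optional size/price filters, then sort by the given field."""
    out = [i for i in items
           if (size is None or size in i["sizes"])
           and (max_price is None or i["price"] <= max_price)]
    return sorted(out, key=lambda i: i[key])

print([i["sku"] for i in filter_and_sort(grid, size="S", max_price=60)])
# → ['Z-1', 'Z-3']
```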
Generated image of the user wearing selected outfit. Displayed alongside product links for direct purchase.
3–5 sec video of user walking in outfit with indoor/outdoor lighting variation. Enhances confidence signal.
Saved looks accessible before an event. Drives revisit frequency and repeat usage signals for success metrics.