Senior Full-Stack JavaScript Engineer (React/TypeScript) — LLM + Voice Integration (Contract)
Develop a voice AI assistant for a text- and conversation-driven slideshow app
Build a prototype AI assistant that responds to user conversation and slide descriptions. The deliverable is an interactive slideshow app with voice integration.
Why This Role?
Hands-on experimentation with LLM and voice integration in a working prototype
Required Skills
Keywords
Original description from Contra
What I want built (in 5–10 hours): I have a "slideshow" in my Base44 app. I want a voice AI assistant that can talk to the user about the current slide. The assistant should only move to the next slide when it explicitly decides to (not automatically). Each slide will have a short text I write describing the content (example: "3 women of varying ages making cookies") so the AI knows what it's looking at.

What "success" looks like:
- When the app starts the slideshow, it activates the microphone; I speak, and the app hears me.
- The AI speaks back in context to the conversation and slide content.
- The AI decides to advance to the next slide once the user seems disinterested.
- The slide does not advance unless the AI issues "next" behind the scenes (as an explicit command, not out loud).
- The AI's response clearly uses the slide's image description, so I know it has context.

What I'm not asking for yet:
- Not perfect mobile polish.
- Not the AI automatically understanding images (I will provide the image description text).
- Not a full production system, just a working prototype.
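One way to satisfy the "next behind the scenes" requirement is to have the model embed a hidden control token in its reply, which the app strips before text-to-speech. A minimal TypeScript sketch of that parsing step follows; the token name `[[next]]` and the helper `parseAssistantReply` are illustrative assumptions, not part of the brief.

```typescript
// Sketch: separate the spoken reply from a hidden slide-advance command.
// Assumption: the system prompt instructs the model to append "[[next]]"
// when it decides the user has lost interest in the current slide.

interface AssistantTurn {
  speech: string;   // text to pass to text-to-speech (never contains the token)
  advance: boolean; // true only when the model explicitly emitted the command
}

const NEXT_TOKEN = "[[next]]";

function parseAssistantReply(raw: string): AssistantTurn {
  const advance = raw.includes(NEXT_TOKEN);
  // Strip the token so it is never vocalized to the user.
  const speech = raw.split(NEXT_TOKEN).join("").trim();
  return { speech, advance };
}
```

The app would then call something like `if (turn.advance) goToNextSlide();` after speaking `turn.speech`, which keeps slide control as an explicit, inaudible decision by the model.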