Google's Gemini can now autonomously operate apps on Pixel 10 Pro and Galaxy S26 Ultra, starting with food delivery and rideshare services in beta.
Google has launched Gemini task automation in beta on the Pixel 10 Pro and Samsung Galaxy S26 Ultra, allowing the AI to navigate and operate third-party apps such as Uber and food delivery services on behalf of users. The feature runs in the background and shows real-time status text describing what Gemini is doing. It currently supports a small subset of apps through a 'reasoning' approach, in which Gemini visually interprets and interacts with each app's UI, because most apps lack dedicated APIs or accessibility integrations. Google's Android head Sameer Samat confirmed that this reasoning fallback is a stopgap and that Google is pushing developers toward deeper integrations.
Gemini's fallback is visual UI reasoning: it literally reads your app's screen when no API or accessibility layer exists. That is both a red flag and an opportunity. Apps without proper Gemini integration will get slower, less reliable automation, and Google is actively nudging developers to adopt its APIs or accessibility hooks to earn first-class support. The architecture split (API > Accessibility > Visual Reasoning) is now a real technical decision with user-experience consequences, as the sketch below illustrates.
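Concretely, the top tier resembles today's Android App Actions pattern: the assistant fires a deep-link intent and the app handles it directly, skipping the screen entirely. The sketch below assumes that pattern; the URI scheme, parameters, and OrderActivity are hypothetical, since Google hasn't published a Gemini-specific surface here.

```kotlin
import android.app.Activity
import android.os.Bundle

// Hypothetical sketch of an intent-tier integration, modeled on the
// existing App Actions deep-link pattern. The "myfood" scheme and the
// "item"/"qty" parameters are illustrative, not a real Gemini API.
class OrderActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // An assistant invoking an app action would land here via a deep
        // link declared in shortcuts.xml, e.g. myfood://order?item=...&qty=...
        val uri = intent?.data
        val item = uri?.getQueryParameter("item")
        val qty = uri?.getQueryParameter("qty")?.toIntOrNull() ?: 1
        if (item != null) {
            startOrderFlow(item, qty) // jump straight to checkout, no UI scraping
        }
    }

    private fun startOrderFlow(item: String, qty: Int) {
        // App-specific ordering logic goes here.
    }
}
```

The point of the pattern: the agent passes structured parameters instead of tapping through screens, which is faster and far less fragile than visual reasoning over a UI that may change with any redesign.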
Audit your Android app this week against Google's Gemini app integration docs. If your app lacks an intent API or an accessibility service layer, you're already delivering a degraded experience to Gemini users on the Pixel 10 Pro and S26 Ultra; the accessibility baseline, at least, is cheap to meet (see the sketch below).
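A minimal sketch of that middle-tier baseline, assuming only standard Android accessibility APIs: views with real names and real click semantics are discoverable by accessibility services, which is what sits between a dedicated API and pure visual scraping.

```kotlin
import android.view.View
import android.widget.Button

// Labeled, clickable views expose themselves to accessibility services;
// an unlabeled custom-drawn control forces an agent back to screen reading.
fun exposeForAccessibility(orderButton: Button) {
    orderButton.contentDescription = "Place order"  // a name a service can match on
    orderButton.importantForAccessibility = View.IMPORTANT_FOR_ACCESSIBILITY_YES
    orderButton.setOnClickListener { /* real click semantics, not a gesture hack */ }
}
```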
Open the Android Developers site and search for 'Gemini app actions integration'; you'll see which integration tier your app currently qualifies for and what's required to upgrade.