Google is making virtual shopping more accessible and realistic with a major update to its AI try-on feature. As of December 11, 2025, users in the United States can virtually try on clothes using nothing more than a selfie. The update removes the need for a full-body photo and significantly lowers the barrier to experimenting with virtual try-ons.
The new capability is powered by Nano Banana, Google’s Gemini 2.5 Flash Image model, which can generate a full-body digital representation of a user from a single selfie. This marks an important step in Google’s broader push to enhance online shopping with artificial intelligence.
Google’s AI try-on feature lets users visualize how clothing items might look on them before making a purchase. Originally launched in July 2025, it lets shoppers preview apparel from Google’s Shopping Graph across Search, Google Shopping, and Google Images.
To use the feature, users tap an apparel product listing and select the “try it on” option. Google then generates images showing how the selected item would look when worn.
Until now, this process required uploading a full-body image. The new update simplifies the experience by letting users start with just a selfie.
With the updated feature, users upload a selfie instead of a full-body photo, and the system uses Nano Banana to generate a full-body digital version of them.
Users are asked to select their usual clothing size. The AI then produces several full-body images reflecting different poses or renderings, and users choose one to serve as their default try-on profile.
This approach balances convenience with personalization: a single selfie becomes the foundation for a more complete and flexible virtual try-on experience.
Requiring a full-body photo created friction for many users. Some shoppers were uncomfortable uploading such images, while others simply did not have a suitable photo on hand.
By reducing the requirement to a selfie, Google makes the feature more approachable and privacy-friendly. Most users already have selfies on their devices, making it easy to experiment with virtual try-ons on the spot.
The update also helps normalize AI-driven fashion previews, making them feel like a natural part of browsing and shopping rather than a specialized tool.
The selfie-based try-on experience is enabled by Nano Banana, which is designed for fast, high-quality image generation while maintaining realistic proportions and textures.
By generating a full-body representation from a single image, the model demonstrates how far AI image understanding and synthesis have advanced. It also highlights Google’s strategy of deploying specialized Gemini models for specific tasks such as shopping, visualization, and personalization.
This technology lets Google scale virtual try-ons without requiring users to provide extensive personal imagery.
While the selfie option is now the default and most convenient path, Google continues to offer flexibility: users who prefer can still upload a full-body photo for try-ons.
In addition, users can choose from a range of preset models with diverse body types. This option is useful for shoppers who want to see how clothing looks on different silhouettes, or who prefer not to upload personal photos at all.
By offering multiple paths, Google accommodates a wide range of comfort levels and shopping behaviors.
The updated AI try-on feature is launching in the United States. It is accessible through Search, Google Shopping, and Google Images when browsing supported apparel listings.
Users simply tap an eligible product and select the “try it on” option to begin; no additional apps or subscriptions are required.
Google has not yet announced timelines for expanding the feature to other countries, but broader availability is expected over time.
The selfie-based try-on update is part of a larger investment by Google in AI-powered shopping experiences. In addition to integrating try-ons into Search and Shopping, Google operates a dedicated app called Doppl.
Doppl focuses on helping users visualize outfits with AI and recently introduced a shoppable discovery feed, which presents AI-generated videos of real products and suggests outfits based on individual style preferences.
Nearly all items in the feed are shoppable, with direct links to merchants. The experience resembles the short-form, visually driven discovery formats popular on social platforms.
Google appears to be positioning AI not just as a utility for confirming purchases, but as a discovery engine that helps users find and experiment with new styles.
AI-generated images and videos in shopping experiences may not appeal to everyone; some users prefer traditional photography or curated editorial content.
However, Google likely sees AI-generated visuals as a scalable way to showcase a large number of products in personalized contexts, allowing rapid iteration and customization based on user preferences.
By combining AI try-ons with discovery feeds, Google is experimenting with new ways to blend inspiration and commerce.
For shoppers, the updated try-on feature reduces uncertainty and increases confidence when buying clothes online. Seeing how an item might look on a personalized model can help reduce returns and improve satisfaction.
For retailers, the feature offers better product presentation without requiring custom photoshoots for every body type. As Google’s Shopping Graph expands, more brands can benefit from richer visual experiences.
This alignment of user convenience and merchant value makes AI try-ons an attractive area for continued investment.
What is Google’s AI try-on feature? It is a virtual shopping tool that lets users see how clothes might look on them using AI-generated images.
Do I need a full-body photo to use it? No. The updated feature lets users generate a full-body digital version of themselves from a single selfie.
Which model powers the feature? It uses Nano Banana, Google’s Gemini 2.5 Flash Image model.
Can I still upload a full-body photo? Yes. Users can still upload a full-body photo or select from preset models with diverse body types.
Where is the selfie-based feature available? It is launching in the United States.
How do I access it? It is available through Search, Google Shopping, and Google Images on supported apparel listings.
What is Doppl? It is Google’s dedicated AI fashion app that helps users visualize outfits and discover shoppable items.
Google’s decision to let users try on clothes with just a selfie is a meaningful step forward in AI-powered shopping. By simplifying the process and leveraging advanced image generation, Google is making virtual try-ons more approachable and more personal.
Combined with investments in discovery feeds and dedicated fashion apps, the update shows Google reimagining online shopping as an interactive, visually driven experience. As the technology matures and expands globally, AI try-ons could become a standard part of how people shop for clothes online.