The new Google search generative experience: Here’s what it looks like

It’s here – the all-new AI-powered Google Search engine we’ve heard rumors about under the code name Magi.

Is it more “visual, snackable, personal, and human”? Yes.

But, for now, you can only gain access to Google’s new search generative experience (SGE) through a Google Labs waitlist – which means you may be waiting weeks before you can play with it directly.

Don’t worry. We’ve got your first look and a deep dive into the new Google Search generative AI experience.

What the new Google search experience looks like

The interface. The new Google search experience may display an AI-generated answer above the search results listings. Google clearly labels this answer with a “Generative AI is experimental” notice, followed by the answer to your query.

The answer appears in a box. Google cites the websites it used to generate the answer, and you can click those sites to dig deeper. You can also follow up with an additional question or click the toggle button at the top right to expand the response.

“You’ll see an AI-powered snapshot of key information to consider, with links to dig deeper,” Google said.

When you click the expand button to toggle a deeper response, the generative AI provides additional answers.

Here is a GIF of it in action:

Throughout the AI-generated answers, Google surfaces websites in clickable boxes with images, so you can click through to the website to learn more.

The color of the generative AI answer box will change to “reflect specific journey types and the query intent itself,” Google said.

Vertical search with AI. This also works for vertical search experiences, such as Google Shopping results. Google’s SGE can pull from 35 billion product listings in the Google Shopping Graph, which gets 1.8 billion updates every hour, Google told us. The generative AI needs to update quickly, in near real time, to provide some of these answers.

Google can give you a good answer for which products to consider when searching for specific types of products, such as [bluetooth speaker for a pool party]:

Conversations. You can also follow up on your query by adding more details or additional prompts in the Ask a follow up box. Google will then generate a follow-up answer.

“Context will be carried over from question to question, to help you more naturally continue your exploration. You’ll also find helpful jumping-off points to web content and a range of perspectives that you can dig into,” Google explained.

Conversational mode is especially useful for follow-up questions, as well as more complex or evolving information journeys, Google explained.

“It uses AI to understand when a person is searching for something that is related to a previous question. It carries over context from previous questions to reformulate the query to better reflect the intent,” Google added.
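Google has not published how this query reformulation works, but the general idea of carrying context from one question to the next can be sketched with a toy rule-based example. Everything below is an illustrative assumption, not Google’s implementation; a real system would use an LLM rather than a pronoun list:

```python
# Toy illustration of context carry-over via query reformulation.
# This is NOT Google's implementation; a real system would use an LLM.
# The pronoun list and substitution rule are illustrative assumptions.

PRONOUNS = {"it", "they", "them", "that", "those", "one"}

def reformulate(history: list[str], follow_up: str) -> str:
    """Rewrite a follow-up query by resolving pronouns against the
    most recent query in the conversation history."""
    if not history:
        return follow_up
    subject = history[-1]  # assume the last query names the referent
    tokens = follow_up.split()
    resolved = [subject if t.lower() in PRONOUNS else t for t in tokens]
    return " ".join(resolved)

# The follow-up "is it waterproof" inherits the subject of the
# earlier query, much as SGE carries context between questions.
history = ["bluetooth speaker for a pool party"]
print(reformulate(history, "is it waterproof"))
# -> is bluetooth speaker for a pool party waterproof
```

The point of the sketch is only that the follow-up query is rewritten to restate the earlier intent before being answered.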

How this works

Technology. Google said this new search experience uses a “variety of LLMs,” including but not limited to MUM and PaLM 2.

This search experience was “purposefully trained to carry out tasks specific to Search, including identifying high-quality web results that corroborate the information presented in the output,” Google said.

Where Google won’t give answers. Google won’t give you answers for everything you might ask it, Liz Reid, VP of Search at Google, told us. Google is trying to be careful with this new version of Google Search, which will show answers for safer queries.

For example, Google won’t show an answer to a question about giving a child Tylenol because it is in the medical space. Google may also not show answers to questions in the financial space.

Sound familiar? Yes, Google is playing it safe in YMYL (Your Money, Your Life) categories. Google is expanding YMYL to include civic information.

“Just as our ranking systems are designed not to unexpectedly shock or offend people with potentially harmful, hateful, or explicit content, SGE is designed not to show such content in its responses,” Google explained.

Google added that they hold this new search experience “to an even higher standard when it comes to generating responses about certain queries where information quality is critically important.”

This new search experience “places even more emphasis on producing informative responses that are corroborated by reliable sources,” Google told us.

When it comes to “data voids” or “information gaps,” where Google’s systems have lower confidence in their responses, Google “aims to not generate an AI-powered snapshot,” they said.

Plus, for explicit or dangerous topics, Google will stay away from generating a response.

Fluid vs. factual. People are more likely to trust answers delivered in a fluid, conversational style, Reid told us. That is precisely why Google prefers answers that are more factual than fluid: a fluid response can earn trust even when the information behind it is wrong.

Hallucinations are a big issue in generative AI – and Google said it is very sensitive about not giving false or inaccurate information, especially on YMYL topics (e.g., health, finance).

“Given the trust people put in Search, we were intentional in constraining conversationality. What this means, for example, is that people might not find conversational mode in SGE to be a free-flowing creative brainstorm partner – and instead find it to be more factual with pointers to relevant resources,” Google added.

Google’s approach. Google has a five-point approach to generative AI in search:

Information needs: How can Google reduce the number of steps it takes for searchers to accomplish a task or complete a goal, and how can Google make the experience more fluid and seamless?

Information quality: The information Google responds with needs to be high quality, and the way the AI responds needs to be held to a high standard. So should Google answer health- or finance-related queries at all?

Safety constraints: Should Google provide first-person responses? Should Google provide fluid answers that users would trust to be 100% accurate when Google might not be able to verify the accuracy of every answer?

Ecosystem: Google wants to drive traffic and credit to the sources of the content, and to design an experience that encourages searchers to dig deeper into those sources.

Ads: Can ads be relevant and provide additional information to the user, and what is the best way to show ads in this experience?

Citations and links

When Google launched Bard, we were all taken aback by the lack of citations and links to publishers. It was rare to see links from Google Bard to publisher websites.

However, in Google’s search generative experience, we see a healthier way of linking to publishers and supporting the ecosystem.

Not only are the answers generated in this search experience built from specific websites, but those websites are also prominently displayed in the answer with a thumbnail image, title, and URL, all clickable through to the publisher’s site.

Google, however, will not directly cite or attribute a particular page. Google’s AI model synthesizes information from a variety of sources.

In fact, Google looks for factual corroboration across sources to build the answers and then shows the citations, which are generally from high-quality online sources. Google is using many of the signals it has had in place for decades to understand information quality.
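Google’s actual corroboration systems are not public, but the principle it describes, only surfacing claims that multiple independent sources agree on, can be sketched as a toy filter. The claims, domains, and threshold below are invented purely for illustration:

```python
# Toy corroboration filter: keep only candidate statements supported
# by at least `min_sources` distinct domains. Purely illustrative;
# Google's actual corroboration systems are not public.

def corroborated(candidates: dict[str, list[str]],
                 min_sources: int = 2) -> list[str]:
    """Return the claims backed by enough independent sources."""
    return [claim for claim, domains in candidates.items()
            if len(set(domains)) >= min_sources]

candidates = {
    "Claim A": ["site-one.example", "site-two.example"],  # two sources
    "Claim B": ["site-three.example"],                    # only one
}
print(corroborated(candidates))
# -> ['Claim A']
```

Deduplicating domains with `set()` captures the “independent sources” idea: the same site repeated twice does not count as corroboration.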

Links to publisher sites. Here is a screenshot showing those websites in the answer:

Toggle deeper. You can click the toggle button at the top right to take a deeper dive, and the generative AI will show more answers with more sources you can click on. The arrow in this image points to the toggle, directly above the website links:

With search results below. Plus, you can continue to scroll down and access classic search results in a more “snackable” format. You can see some of the links to search results, in a more boxed-in format, here:

More details

Google also spoke about its AI principles and emphasized that it takes all of these AI technologies seriously.

“We’re taking a responsible and deliberate approach to bringing new generative AI capabilities to Search,” Google said.

This is not Bard. Bard was designed to showcase what LLMs can do; this experience is specifically designed for search and works differently, as shown above.

Google has deployed its search quality raters to do some early testing over the next few weeks before launching it to the first set of public users. Search quality raters will provide feedback both during this pre-release phase and on an ongoing basis to help improve the overall results and experience of this new approach to search.

“These ratings do not directly impact SGE’s output, but are used to train the LLMs and improve the experience overall,” Google said.

You can sign up for the waitlist today, with the first wave of approvals to try this new search experience coming in the weeks ahead; more on that below.

For more on this topic, see our companion shorter stories:

How to sign up for the new Google Search generative experience

Ads on the new Google Search generative experience

New Google Perspectives, About this image and AI-generated image labels

Why we care

Google’s new search generative experience is very different from anything we’ve seen from Google before, but at the same time it still feels very much like Google Search. Unlike Bard, Google is also doing a much better job of linking to publishers, with the goal of driving traffic to websites.

This experience isn’t replacing the Google search you know today – when that might happen is anyone’s guess right now. We expect Google will listen to feedback and adjust these features before fully launching it as the main search experience on Google.com.

The post The new Google search generative experience: Here’s what it looks like appeared first on Search Engine Land.