Google I/O / Practical AI Stuff for Developers


Post by Michael Rosario of InnovativeTeams.NET

Day 2 of Google I/O was jam-packed with announcements for developers of all stripes, from the future of Android development to powerful new web features and exciting advancements in AI at the edge. Here is a quick rundown of the practical highlights:

WebAI: Pushing Boundaries

  • Expensive Work on the Client: The future of WebAI is about offloading heavy processing to the client. Front-end developers can potentially apply their JavaScript skills to run the Google Gemma models for NLP tasks like content tagging, data extraction, or summarization. Learn more about this pattern using the following link:

  • LLM Inference guide for Web | Edge | Google for Developers

  • LLM Inference API: Explore a range of pre-trained models for various tasks, including Gemma 2B, Phi-2, Falcon-RW-1B, and StableLM-3B. You can even fine-tune these models for your specific needs! To find community examples, check out the hashtag #webai on YouTube or your favorite social platform.

  • Visual Blocks Framework: This powerful tool, co-developed with Hugging Face, lets you leverage pre-built blocks for tasks like image segmentation, translation, and text classification. You can even get creative and build custom blocks using web components! (#visualblocks) Explore the collection at goo.gle/hf-visualblocks and learn more about custom block creation at goo.gle/instructpipe.

  • Model Explorer & Explainable AI: Gain deeper insights into model behavior with Model Explorer and delve into the world of explainable AI with resources on web.dev/explore/ai. Within Google, this tool helps model builders explore the structure of their models.
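The client-side NLP pattern above boils down to three steps: build a prompt, run it through an on-device model, and parse the reply. Here is a minimal TypeScript sketch of the testable parts; the commented-out model call is an assumption based on the pattern described in the LLM Inference guide, not verified API code:

```typescript
// Sketch: client-side content tagging with an on-device LLM.
// Prompt building and reply parsing are plain string handling and run
// anywhere; the model call itself is shown only as a comment because
// the exact API names are assumptions, not verified against the guide.

const TAGS = ["angular", "webai", "android"];

// Ask the model to pick tags from a fixed allow-list.
function buildTaggingPrompt(text: string, tags: string[]): string {
  return [
    "Choose the tags that apply to the text below.",
    `Allowed tags: ${tags.join(", ")}.`,
    "Respond with a comma-separated list only.",
    "",
    text,
  ].join("\n");
}

// Keep only allowed tags from the model's comma-separated reply,
// so a chatty or off-list response degrades gracefully.
function parseTags(reply: string, allowed: string[]): string[] {
  const allowedSet = new Set(allowed.map((t) => t.toLowerCase()));
  return reply
    .split(",")
    .map((t) => t.trim().toLowerCase())
    .filter((t) => allowedSet.has(t));
}

// Illustrative model call (names are assumptions -- see the guide):
// const reply = await llm.generateResponse(buildTaggingPrompt(post, TAGS));
// const tags = parseTags(reply, TAGS);
```

Constraining the model to an allow-list and filtering its reply is what makes a small on-device model practical for tagging: even a noisy response collapses to a clean set of known tags.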

Angular Gets a Signal Boost

  • New Control Flow Syntax (v17): Get ready to streamline your templates with the new built-in control flow syntax in Angular 17.
  • Signals - A Stage 1 Proposal: Check out the stage 1 TC39 proposal for Signals, a reactive-state primitive proposed as an open web standard that aims to simplify code and potentially reduce reliance on RxJS. In the future, you might find signals travel into other web frameworks besides Angular. Read more on GitHub. The team is also working on interoperability between observables and signals.
  • Helpful blog posts covering input and output signal concepts:
  • Signals • Overview • Angular
  • https://dev.to/this-is-angular/whats-new-in-angular-173-1148
  • https://dev.to/oz/angular-inputs-and-single-source-of-truth-4kog
  • Analog.js is a meta-framework for building websites with Angular. It provides file-based routing, server-side data fetching, markdown as content routes, and server-side rendering. During Google I/O, the team showed a single-file component template format that looked a lot like the Vue composition API.
  • Keeping Angular lean: In recent releases, the team has worked hard to reduce the concept count and learning curve of the Angular framework. From the enterprise perspective, developers can opt into these new features and syntax incrementally. I love the Angular documentation “reboot” at https://angular.dev/overview with robust working samples in the browser.
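For flavor, the v17 control flow mentioned above moves `*ngIf`/`*ngFor` structural directives into `@if` and `@for` blocks written directly in the template. A small illustrative snippet (names are made up, not from any official sample):

```html
@if (loggedIn) {
  <p>Welcome back!</p>
} @else {
  <p>Please sign in.</p>
}

<ul>
  @for (item of items; track item.id) {
    <li>{{ item.name }}</li>
  } @empty {
    <li>No items yet.</li>
  }
</ul>
```

The mandatory `track` expression and the `@empty` block are nice touches: the first bakes in the performance habit that `trackBy` made optional, and the second removes a common extra `*ngIf` for empty lists.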
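To make the signals bullet concrete, here is a deliberately tiny sketch of the core idea in TypeScript: a writable signal notifies its dependents, and a computed signal re-runs when a dependency changes. This is an illustrative toy, not Angular's actual implementation (which is pull-based, lazy, and handles dependency cleanup):

```typescript
// Toy signal implementation -- illustrative only, NOT Angular's internals.

// The computation currently being evaluated, so reads can register it.
let activeEffect: (() => void) | null = null;

// A writable signal: reading it subscribes the active computation;
// writing it re-runs every subscriber.
function signal<T>(initial: T) {
  let value = initial;
  const subscribers = new Set<() => void>();
  const read = (() => {
    if (activeEffect) subscribers.add(activeEffect);
    return value;
  }) as { (): T; set(next: T): void };
  read.set = (next: T) => {
    value = next;
    subscribers.forEach((fn) => fn());
  };
  return read;
}

// A computed signal: runs once eagerly to register its dependencies,
// then recomputes whenever a dependency's set() fires.
function computed<T>(fn: () => T): () => T {
  let value!: T;
  const recompute = () => { value = fn(); };
  const prev = activeEffect;
  activeEffect = recompute;
  recompute(); // initial run reads dependencies and subscribes
  activeEffect = prev;
  return () => value;
}

// Usage:
const count = signal(1);
const doubled = computed(() => count() * 2);
count.set(5);
// doubled() now returns 10
```

The point of the sketch is the auto-tracking trick: because `count()` is *called* inside the computed function, the signal can record who is reading it, which is exactly what removes the manual subscription plumbing RxJS often requires.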

Android Gets Many Boosts

  • AI Copilot with Gemini: Buckle up for smarter coding with Gemini in Android Studio, an AI-powered assistant. It’s great to see Microsoft, GitHub, and Google in friendly competition to build code copilot tools that help developers feel more productive. Gemini in Android Studio offers an AI code assistant chat experience; from your code window, Gemini can provide AI-enabled code completion and time-saving AI refactorings.
  • Screen Designs to Android Compose with Ease: This is huge! Directly translate your screen designs into Android Compose layouts. This will become a major time-saver and help teams iterate faster from idea to implementation.
  • Get ready for a more powerful Pixel experience! Later this year, Pixel phones will leverage Gemini Nano, an on-device foundation model, to understand not just text, but also sights, sounds, and spoken language. This opens up a world of possibilities for richer and more intuitive interactions. In contrast to the “online” Gemini variants, Google optimized Gemini Nano for providing fast “on device” responses with or without a data connection.

More practical workshops and code labs

This is just a taste of the exciting announcements from Google I/O Day 2. For a deeper dive, be sure to check out the official resources and explore the many new tools and features that will be shaping the future of development.