Google has launched its "Gemini era," a period of widespread adoption and rebranding of its AI technology across various products and services. Gemini is the name of Google's current-generation family of multimodal AI models: like other large language models they can understand and generate text, but they also natively process images, audio, video, and code. The core models use a transformer architecture and rely on standard techniques such as pretraining and fine-tuning. Google has confirmed that Gemini models are trained on multiple modalities from the beginning, rather than bolting modalities onto a text-only model, which the company says allows for more intuitive understanding. Google claims that Gemini can "seamlessly understand and reason about all kinds of inputs from the ground up."

Google offers Gemini models in a range of sizes and capabilities, including Gemini 1.0 Ultra, Gemini 1.5 Pro, Gemini 1.5 Flash, and Gemini 1.0 Nano, each designed for specific tasks and devices, from large-scale reasoning down to on-device use.

The company is integrating Gemini into its applications, services, and products, including Gmail, Google Docs, Google Search, Android, Chrome, YouTube, and more. Developers can access Gemini through the Gemini API in Google AI Studio or in Google Cloud Vertex AI to build AI-powered apps and integrate AI into their products. With Gemini, Google aims to give developers a powerful tool for fine-tuning models on their own data and automating workflows across various applications.
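As a minimal sketch of what that developer access can look like, the snippet below assumes the `google-generativeai` Python SDK (installable via pip) and a `GOOGLE_API_KEY` environment variable obtained from Google AI Studio; the model name, prompt text, and the `build_prompt`/`ask_gemini` helper names are illustrative, not part of any official example.

```python
import os

# Illustrative model name; the set of available models varies over time.
MODEL_NAME = "gemini-1.5-flash"

def build_prompt(task: str, content: str) -> str:
    """Pure helper that assembles the text prompt sent to the model."""
    return f"{task}:\n\n{content}"

def ask_gemini(task: str, content: str) -> str:
    """Send a text prompt to the Gemini API and return the model's reply.

    Assumes the google-generativeai SDK is installed and GOOGLE_API_KEY
    is set in the environment. The import and network call are kept
    inside the function so the module loads without the SDK present.
    """
    import google.generativeai as genai  # pip install google-generativeai
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel(MODEL_NAME)
    response = model.generate_content(build_prompt(task, content))
    return response.text
```

A call such as `ask_gemini("Summarize in one sentence", article_text)` would return the generated text; the same API key and model names also work from Vertex AI, though that platform uses its own client libraries.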