Apple will train AI models using its street view photos
Apple plans to start using the images it collects for Apple Maps to train its AI models. In a disclosure spotted by 9to5Mac, the company said that, starting this month, it will use the imagery it captures for its Look Around feature to train some of its generative AI models, among other purposes.
Look Around is Apple’s answer to Google Street View. The company first released the feature alongside its 2019 Apple Maps revamp, and it allows users to view locations from ground level. Apple blurs faces and license plates captured in Look Around imagery to protect the privacy of anyone caught in its survey work.
“In addition to improving Apple Maps and blurring faces and license plates in images, Apple will use imagery collected during surveys conducted beginning in March 2025 to develop and improve other Apple products and services,” the company wrote in the disclosure. “This includes using data to train models that power Apple products and services, including models related to image recognition, creation and enhancement.”
Apple did not immediately respond to Engadget’s request for more information.
The company’s Apple Maps Image Collection page lists the locations and dates where it plans to capture new imagery for Look Around. You can find out when Apple’s surveyors and vehicles will be in your area by sorting by country and then clicking on a specific region.
Apple currently offers a few different features that rely on image generation models. Image Playground, for example, lets owners of Apple Intelligence-compatible devices write prompts to create new images. There’s also Clean Up in the Photos app, which you can use to remove unwanted objects from your favorite snapshots.
Google has been using Street View imagery to train AI models for years. In 2017, for instance, a pair of researchers at the company trained a machine learning model to produce professional-quality photos from a dataset of Street View images.