Media AI applications analyze media content such as movies, TV programs, advertisement videos or user-generated content. The solutions often involve computer vision. Typical scenarios include the analysis of images using object recognition or face recognition techniques, or the analysis of video to recognize scenes, objects or faces. AI-based media analysis can facilitate media search, the creation of descriptive keywords for content, content policy monitoring (such as verifying the suitability of content for a particular TV viewing time), speech to text for archival or other purposes, and the detection of logos, products or celebrity faces for ad placement.
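As a minimal sketch of one such analysis task, the snippet below detects faces in a single video frame with OpenCV's bundled Haar-cascade model. The input filename is illustrative; production systems typically use deep networks instead, though the pipeline shape (decode a frame, detect, tag) is similar.

```python
# Minimal face-detection sketch using OpenCV's bundled Haar cascade.
# "frame.jpg" is an illustrative stand-in for a decoded video frame.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("frame.jpg")                  # decode one frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # detector works on grayscale
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"detected {len(faces)} face(s)")          # each face as (x, y, w, h)
```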
Other AI applications in media include:
* Motion interpolation
* Pixel-art scaling algorithms
* Image scaling
* Image restoration
* Photo colorization
* Film restoration and video upscaling
* Photo tagging
* Text-to-image models such as DALL-E, Midjourney and Stable Diffusion
* Image to video
* Text to video, such as Make-A-Video from Meta, and Imagen Video and Phenaki from Google
* Text to music, with AI models such as MusicLM
* Text to speech, such as ElevenLabs and 15.ai
* Motion capture

=== Deep-fakes ===
Deep-fakes can be used for comedic purposes but are better known for
fake news and hoaxes. Deepfakes can portray individuals in harmful or compromising situations, causing significant reputational damage and emotional distress, especially when the content is defamatory or violates personal ethics. While defamation and false light laws offer some recourse, their focus on false statements rather than fabricated images or videos often leaves victims with limited legal protection and a challenging burden of proof. In January 2016, the
Horizon 2020 program financed the InVID Project to help journalists and researchers detect fake documents; its tools were made available as browser plugins. In June 2016, the visual computing groups of the
Technical University of Munich and Stanford University developed Face2Face, a program that animates photographs of faces, mimicking the facial expressions of another person. In September 2018, U.S. Senator
Mark Warner proposed to penalize
social media companies that allow sharing of deep-fake documents on their platforms. In 2018, Darius Afchar and Vincent Nozick found a way to detect faked content by analyzing the mesoscopic properties of video frames.
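Afchar and Nozick's published architecture is not reproduced here; the sketch below is a simplified MesoNet-style classifier in Keras, with illustrative layer sizes, showing the general idea of a shallow convolutional network aimed at mid-level (mesoscopic) image properties rather than fine-grained noise or high-level semantics.

```python
# Simplified MesoNet-style classifier (after Afchar et al., 2018).
# Layer sizes are illustrative, not the published architecture.
from tensorflow.keras import layers, models

def build_meso_classifier(input_shape=(256, 256, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Few, shallow convolutions keep the network focused on mesoscopic
        # texture cues rather than microscopic noise or scene semantics.
        layers.Conv2D(8, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(2),
        layers.Conv2D(16, 5, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(4),
        layers.Conv2D(16, 5, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(4),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(16, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # real vs. fake frame
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```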
DARPA gave 68 million dollars to work on deep-fake detection, and AI software capable of detecting deep-fakes and of cloning human voices has been developed.
=== Video surveillance analysis and manipulated media detection ===
AI algorithms have been used to detect deepfake videos.
=== Video production ===
Artificial intelligence is also starting to be used in video production, with tools and software being developed that use generative AI to create new video or alter existing video. Some of the major tools currently used in these processes are DALL-E, Midjourney, and Runway. Waymark Studios used the tools offered by both DALL-E and Midjourney to create a fully AI-generated film called The Frost in the summer of 2023.
=== Music ===
AI has been used to compose music of various genres.
David Cope created an AI called
Emily Howell that became well known in the field of algorithmic computer music. The algorithm behind Emily Howell is registered as a US patent. In 2012, the AI Iamus created the first complete classical album composed by a computer.
AIVA (Artificial Intelligence Virtual Artist) composes symphonic music, mainly classical music for film scores. It became the first virtual composer to be recognized by a musical professional association.
Melomics creates computer-generated music for stress and pain relief. The Watson Beat uses
reinforcement learning and
deep belief networks to compose music from a simple seed melody and a selected style. The software was open-sourced, and musicians such as Taryn Southern collaborated with the project to create music. South Korean singer Hayeon's debut song, "Eyes on You", was composed using AI under the supervision of human composers, including NUVO.
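Watson Beat's reinforcement-learning and deep-belief-network models are far more elaborate than anything shown here; as a much simpler, hypothetical illustration of composing from a seed melody, the sketch below learns note-to-note transitions from the seed and samples a continuation.

```python
# Deliberately simple stand-in for seed-based composition (not Watson Beat's
# method): a first-order Markov chain over the notes of the seed melody.
import random

def compose(seed_notes, length=16, rng=random.Random(0)):
    # Build a note-to-note transition table from the seed melody.
    transitions = {}
    for a, b in zip(seed_notes, seed_notes[1:]):
        transitions.setdefault(a, []).append(b)
    melody = [seed_notes[0]]
    for _ in range(length - 1):
        # Sample the next note; fall back to any seed note if unseen.
        choices = transitions.get(melody[-1]) or seed_notes
        melody.append(rng.choice(choices))
    return melody

seed = ["C4", "E4", "G4", "E4", "C4", "D4", "E4", "C4"]  # illustrative seed
print(compose(seed))
```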
=== Writing and reporting ===
Narrative Science sells
computer-generated news and reports. It summarizes sporting events based on statistical data from the game. It also creates financial reports and real estate analyses.
Automated Insights generates personalized recaps and previews for
Yahoo Sports Fantasy Football.
Yseop uses AI to turn structured data into natural language comments and recommendations.
Yseop writes financial reports, executive summaries, personalized sales or marketing documents and more in multiple languages, including English, Spanish, French, and German. TALESPIN made up stories similar to the
fables of Aesop. The program started with a set of characters who wanted to achieve certain goals. Mark Riedl and Vadim Bulitko asserted that the essence of storytelling was experience management, or "how to balance the need for a coherent story progression with user agency, which is often at odds". While AI storytelling focuses on story generation (character and plot), story communication also received attention. In 2002, researchers developed an architectural framework for narrative prose generation. They faithfully reproduced text variety and complexity on stories such as
Little Red Riding Hood. In 2016, a Japanese AI co-wrote a short story and almost won a literary prize. South Korean company Hanteo Global uses a journalism bot to write articles. Literary authors are also exploring uses of AI. An example is
David Jhave Johnston's work
ReRites (2017–2019), where the poet created a daily rite of editing the poetic output of a neural network to create a series of performances and publications.
=== Sports writing ===
In 2010, artificial intelligence was used to generate news articles automatically from baseball statistics. This was launched by
The Big Ten Network using software from
Narrative Science. Unable to cover every Minor League Baseball game with a large enough team, the Associated Press collaborated with Automated Insights in 2016 to create automated game recaps. UOL in Brazil expanded the use of AI in its writing: rather than just generating news stories, it programmed the AI to include commonly searched words on
Google.
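The commercial data-to-text systems described above use sophisticated natural language generation; as a deliberately simplified, hypothetical illustration of the underlying idea, the sketch below fills a recap template from a box score (all names and numbers are made up).

```python
# Minimal template-based data-to-text sketch: turn a box score into a recap
# sentence. Commercial systems use far richer generation; the teams, stats,
# and field names here are invented for illustration.
def recap(game):
    home, away = game["home"], game["away"]
    winner, loser = (home, away) if home["runs"] > away["runs"] else (away, home)
    margin = abs(home["runs"] - away["runs"])
    verb = "edged" if margin <= 2 else "routed"   # crude lexical choice
    return (f'{winner["name"]} {verb} {loser["name"]} '
            f'{winner["runs"]}-{loser["runs"]} on {game["date"]}.')

game = {"date": "May 4",
        "home": {"name": "River Cats", "runs": 5},
        "away": {"name": "Sounds", "runs": 3}}
print(recap(game))  # River Cats edged Sounds 5-3 on May 4.
```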
=== Wikipedia ===
Millions of Wikipedia articles have been edited by bots, though these are usually not artificial intelligence software. Many AI platforms use Wikipedia data, mainly for training machine learning applications. There is research and development of various artificial intelligence applications for Wikipedia, such as identifying outdated sentences, detecting covert vandalism, or recommending articles and tasks to new editors. Machine translation has also been used to translate Wikipedia articles and could play a larger role in creating, updating, expanding, and generally improving them in the future. A content translation tool allows editors of some Wikipedias to more easily translate articles across several select languages.
=== Video games ===
In video games, AI is routinely used to generate behavior in non-player characters (NPCs). In addition, AI is used for pathfinding.
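A common concrete technique behind NPC pathfinding is A* search; the sketch below is a minimal grid-based version with a Manhattan-distance heuristic. The grid and coordinates are illustrative, and real engines typically search navigation meshes or waypoint graphs instead, but the core search is the same.

```python
# Minimal A* pathfinding sketch on a 4-connected grid.
# grid: 2D list, 0 = walkable, 1 = blocked; start/goal: (row, col).
import heapq

def astar(grid, start, goal):
    def h(a, b):  # Manhattan distance: admissible heuristic on a grid
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_heap = [(h(start, goal), 0, start)]   # (f = g + h, g, node)
    came_from, g_cost = {}, {start: 0}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:                       # walk parents back to start
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0):
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    came_from[(nr, nc)] = node
                    heapq.heappush(open_heap,
                                   (ng + h((nr, nc), goal), ng, (nr, nc)))
    return None  # no path exists

# Example: route an NPC around an obstacle row.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```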
Games with less typical AI include the AI director of Left 4 Dead (2008) and the neuroevolutionary training of platoons in
Supreme Commander 2 (2010). AI is also used in Alien: Isolation (2014) to control the actions the Alien will perform next.

Games have been a major application of AI's capabilities since the 1950s. In the 21st century, AIs have beaten human players in many games, including
chess (
Deep Blue),
Jeopardy! (
Watson),
Go (
AlphaGo),
poker (
Pluribus and
Cepheus),
E-sports (
StarCraft), and
general game playing (
AlphaZero and
MuZero). Kuki AI is a set of chatbots and other apps designed for entertainment and as a marketing tool.
=== Visual images ===
The first AI art program, called
AARON, was developed by
Harold Cohen in 1968 with the goal of being able to code the act of drawing. It started by creating simple black-and-white drawings, and later progressed to painting with special brushes and dyes chosen by the program itself without mediation from Cohen.
DALL-E,
Imagen, and
Midjourney have been used to generate visual images from inputs such as text or other images. Some AI tools allow users to input images and output changed versions of them, for example to display an object or product in different environments. AI image models can also attempt to replicate the specific styles of artists, and can add visual complexity to rough sketches. AI has also been used for quantitative analysis of existing digital art collections. Two computational methods, close reading and distant viewing, are the typical approaches used to analyze digitized art: while distant viewing covers the analysis of large collections, close reading involves a single piece of artwork.
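As a minimal sketch of how such text-to-image generation is typically invoked programmatically, the snippet below uses the open-weight Stable Diffusion model through the Hugging Face diffusers library. The model ID and prompt are illustrative, a CUDA-capable GPU is assumed, and DALL-E and Midjourney are hosted services with their own interfaces.

```python
# Minimal text-to-image sketch with Stable Diffusion via Hugging Face
# diffusers. Model ID, prompt, and output path are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Generate one image from a text prompt and save it to disk.
image = pipe("a rough sketch of a chair, rendered as a product photo").images[0]
image.save("chair.png")
```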
=== Computer animation ===
In 2023, Netflix Japan's use of AI to generate background images for the short film The Dog & the Boy was met with backlash online.

== Finance ==