== Intelligent Avatar Platforms (IAP) ==
An intelligent avatar platform (IAP) is an online platform, supported by artificial intelligence, that allows a person to create a digital clone of themselves. The platforms are essentially marketed as places where one 'lives eternally', since the clone can interact with other avatars on the same platform. IAPs have become a means of attaining digital immortality, as well as of maintaining a family tree and legacy for following generations to see. Additionally, to make the clone as close to the original person as possible, companies encourage users to interact with their own clone by chatting with it and answering questions on its behalf. This allows the algorithm to learn the cognition of the original person and apply it to the clone. One such platform, Intellitar, closed down in 2012 because of intellectual property battles over the technology it used.

Potential concerns with IAPs include data breaches and the failure to obtain the consent of the deceased. An IAP must maintain strong safeguards against data breaches and hacking in order to protect the personal information of the dead, which can include voice recordings, photos, and messages.

With deepfakes, industries can cut the cost of hiring actors or models for films and advertisements, creating video efficiently and at low cost from a series of photos and audio recordings collected with the individual's consent. A potential concern with deepfakes is that access is available to virtually anyone who downloads one of the apps offering the service. With anyone able to access the tool, some may use it maliciously to create revenge pornography or manipulated videos of public officials making statements they never made in real life. This not only invades the privacy of the individuals depicted but also raises various ethical concerns.
== Voice cloning ==
Voice cloning is a subset of audio deepfake methods that uses artificial intelligence to generate a clone of a person's voice. Voice cloning involves deep learning algorithms that take in voice recordings of an individual and synthesize a voice that faithfully replicates the original with great accuracy of tone and likeness. Cloning a voice requires high-performance hardware: the computations are usually done on graphics processing units (GPUs) and very often rely on cloud computing, due to the enormous amount of computation needed. Audio training data has to be fed into an artificial intelligence model; these are often original recordings that provide an example of the voice of the person concerned. The model can use this data to create an authentic-sounding voice that can reproduce whatever is typed, called
text-to-speech, or whatever is spoken, called speech-to-speech.

This technology worries many because of its impact on issues ranging from political discourse to the rule of law. Early warning signs have already appeared in the form of phone scams and fake videos on social media of people doing things they never did. Protections against these threats can be implemented in two main ways. The first is to create a way to analyze or detect the authenticity of a video; this approach will inevitably be a cat-and-mouse game, as ever-evolving generators defeat the detectors. The second is to embed creation and modification information in software or hardware. This would work only if the data were not editable; the idea is to create an inaudible watermark that acts as a source of truth. In other words, one could tell whether a recording is authentic by seeing where it was shot, produced, edited, and so on.

One such application's gratis and non-commercial nature (with the only stipulation being that the project be properly credited when used), ease of use, and substantial improvements over current text-to-speech implementations have been lauded by users; however, some critics and voice actors have questioned the legality and ethics of leaving such technology publicly available and readily accessible. Although the technology is still in the developmental stage, it is advancing rapidly as large technology corporations, such as Google and Amazon, invest heavily in its development.

Positive uses of voice cloning include the ability to synthesize audiobooks at scale without human labor, and translating podcast content into different languages in the podcaster's own voice. People who have lost their voice can also regain a sense of individuality by creating a voice clone from recordings made before the loss. On the other hand, voice cloning is susceptible to misuse: the voices of celebrities and public officials have been cloned to make provocative statements that the actual person has no association with. In recognition of the threat that voice cloning poses to privacy, civility, and democratic processes, institutions including the
Federal Trade Commission,
U.S. Department of Justice, the Defense Advanced Research Projects Agency (DARPA), and the Italian Ministry of Education, University and Research (MIUR) have weighed in on various audio deepfake use cases and methods that might be used to combat them.

== Constructive uses ==
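The inaudible-watermark idea described above can be sketched as a toy least-significant-bit (LSB) scheme. This is a minimal illustration only: the function names, the plain-integer stand-in for 16-bit PCM samples, and the provenance tag are assumptions made for the example, and real provenance watermarks use far more robust, perceptually masked encodings.

```python
# Toy illustration of embedding provenance data in audio: hide a payload in
# the least-significant bits of (stand-in) 16-bit PCM samples. Flipping only
# the LSB changes each sample by at most 1, which is inaudible.

def embed_watermark(samples, payload: bytes):
    """Return a copy of the samples with payload bits written into the LSBs."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(samples):
        raise ValueError("payload too large for the audio clip")
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set the payload bit
    return out

def extract_watermark(samples, num_bytes: int):
    """Read num_bytes of payload back out of the sample LSBs."""
    data = bytearray()
    for b in range(num_bytes):
        byte = 0
        for i in range(8):
            byte |= (samples[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

# Example: a fake "recording" of 128 samples carrying a hypothetical tag.
audio = [1000 + n for n in range(128)]  # stand-in for real PCM sample values
tag = b"src:studioA"                    # illustrative provenance metadata
marked = embed_watermark(audio, tag)
assert extract_watermark(marked, len(tag)) == tag
```

A scheme this naive is trivially destroyed by re-encoding or compression; it is shown only to make the "source of truth embedded in the signal" concept concrete.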