Functional Requirements for 3 New Standards Published
Geneva (11 July 2022)
Neural Network Watermarking (MPAI-NNW) will provide the means to measure, for a given size of the watermarking payload:
1. the ability of the watermark inserter to inject a payload without deteriorating the performance of the neural network;
2. the ability of the watermark detector to recognise the presence of the inserted watermark, and of the watermark decoder to successfully retrieve its payload;
3. the computational cost for the watermark inserter to inject a payload, and for the watermark detector/decoder to detect/decode a payload from a watermarked model or from any of its inferences.
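The three measurements can be illustrated with a toy sketch. Everything below is illustrative only: the "model" is a plain list of floats and the watermark is a simple sign perturbation, not a real watermarking scheme; MPAI-NNW will define the actual methodologies.

```python
# Toy sketch of the three MPAI-NNW measurements (illustrative only).
import random
import time

PAYLOAD_BITS = 32  # size of the watermarking payload under test

def make_model(n=1000, seed=0):
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]

def performance(model):
    # Stand-in metric; a real test would evaluate the NN on a task test set.
    return sum(w * w for w in model) / len(model)

def insert_watermark(model, payload):
    # Embed each payload bit as the sign of a tiny weight perturbation.
    marked = list(model)
    for i, bit in enumerate(payload):
        marked[i] += 1e-4 if bit else -1e-4
    return marked

def decode_watermark(marked, original, n_bits):
    # Recover each bit from the sign of the weight difference.
    return [1 if marked[i] - original[i] > 0 else 0 for i in range(n_bits)]

model = make_model()
rng = random.Random(1)
payload = [rng.randint(0, 1) for _ in range(PAYLOAD_BITS)]

t0 = time.perf_counter()
marked = insert_watermark(model, payload)
insertion_cost = time.perf_counter() - t0                    # measurement 3: computational cost

perf_delta = abs(performance(marked) - performance(model))   # measurement 1: performance impact
recovered = decode_watermark(marked, model, PAYLOAD_BITS)
bit_error_rate = sum(a != b for a, b in zip(recovered, payload)) / PAYLOAD_BITS  # measurement 2
```

In this toy setting the payload is recovered with no bit errors and the performance metric barely moves; a real evaluation would report the same three quantities for an actual network and watermarking scheme.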
Geneva, Switzerland – 22 June 2022. The international, non-profit, unaffiliated Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) standards developing organisation has concluded its 21st General Assembly. Among the outcomes is the approval of three Use Cases and Functional Requirements documents for AI Framework V2, Multimodal Conversation V2 and Neural Network Watermarking V1.
This milestone is important because MPAI Principal Members intending to participate in the development of the standards can now develop the Framework Licences of the three planned standards. The Framework Licence has been devised by MPAI to facilitate the practical availability of approved standards (see here for an example). It is a licence without critical data such as costs, dates, rates, etc. MPAI is now drafting the Calls for Technologies for the three standards and plans to adopt and publish them on 2022/07/19, the second anniversary of the launch of the MPAI project.
AI Framework (MPAI-AIF) V1 specifies an infrastructure enabling the execution of implementations and access to the MPAI Store. V2 will add security support to the framework and is the next step following today’s release of the MPAI-AIF V1 Reference Software.
Multimodal Conversation (MPAI-MMC) V1 enables human-machine conversation emulating human-human conversation. V2 will specify technologies supporting five new use cases:
1. Personal Status Extraction: provides an estimate of the Personal Status (PS) – of a human or an avatar – conveyed by Text, Speech, Face, and Gesture. PS is the ensemble of information internal to a person, including Emotion, Cognitive State, and Attitude.
2. Personal Status Display: generates an avatar from Text and PS that utters speech with the intended PS while its face and gesture display the intended PS.
3. Conversation About a Scene: a human holds a conversation with a machine about objects in a scene. While conversing, the human points a finger to indicate interest in a particular object. The machine is helped by its understanding of the human’s PS.
4. Human-Connected Autonomous Vehicle (CAV) Interaction: a group of humans converses with a CAV, which understands the utterances and PSs of the humans it converses with and manifests itself as the output of a Personal Status Display.
5. Avatar-Based Videoconference: avatars representing humans with a high degree of accuracy participate in a videoconference. A Virtual Secretary, represented as an avatar displaying PS, creates an online summary of the meeting, with quality enhanced by its ability to understand the PS of the avatars it converses with.
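As a purely hypothetical illustration of the Personal Status notion running through these use cases – an ensemble of Emotion, Cognitive State and Attitude, each conveyable by Text, Speech, Face and Gesture – one might represent it along these lines. Field and value names are invented here; MPAI-MMC V2 will define the normative format.

```python
# Hypothetical representation of a Personal Status (PS); names are illustrative only.
from dataclasses import dataclass, field

MODALITIES = ("Text", "Speech", "Face", "Gesture")

@dataclass
class PersonalStatus:
    # Each factor maps a conveying modality to an estimated label.
    emotion: dict = field(default_factory=dict)          # e.g. {"Face": "cheerful"}
    cognitive_state: dict = field(default_factory=dict)  # e.g. {"Text": "confused"}
    attitude: dict = field(default_factory=dict)         # e.g. {"Gesture": "confrontational"}

# A PS as a Personal Status Extraction module might emit it:
ps = PersonalStatus(emotion={"Face": "cheerful", "Speech": "cheerful"})
```

A Personal Status Display would consume such a structure together with Text to drive the avatar's speech, face and gesture.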
MPAI will hold four online presentations of the documents on the following dates:
MPAI-MMC will be presented in two sessions because of the number and scope of the use cases and of the supporting technologies.
- Multimodal Conversation V2 (MPAI-MMC) on 2022/07/07T14:00UTC. Register at https://bit.ly/3bhQIha
- AI Framework V2 (MPAI-AIF) on 2022/07/11T15:00UTC. Register at https://bit.ly/3HI4PIT
- Multimodal Conversation V2 (MPAI-MMC) on 2022/07/12T14:00UTC. Register at https://bit.ly/3OgEZ0S
- Neural Network Watermarking (MPAI-NNW) on 2022/07/12T15:00UTC. Register at https://bit.ly/3QEZSEH
Those intending to attend a presentation are invited to register at the corresponding link above.
MPAI develops data coding standards for applications that have AI as the core enabling technology. Any legal entity supporting the MPAI mission may join MPAI, if able to contribute to the development of standards for the efficient use of data.
So far, MPAI has developed five standards (see the list below), is currently engaged in extending two of the approved standards, and is developing another nine.
Name of standard: AI Framework (approved). Acronym: MPAI-AIF
- Link: https://mpai.community/standards/mpai-aif
- Brief description: Specifies an infrastructure enabling the execution of implementations and access to the MPAI Store.
Name of standard: Context-based Audio Enhancement (developed). Acronym: MPAI-CAE
- Brief description: Improves the user experience of audio-related applications in a variety of contexts.
Name of standard: Compression and Understanding of Industrial Data (developed). Acronym: MPAI-CUI
- Link: https://mpai.community/standards/mpai-cui
- Brief description: Predicts company performance from governance, financial, and risk data.
Name of standard: Governance of the MPAI Ecosystem (developed). Acronym: MPAI-GME
- Brief description: Establishes the rules governing the submission of and access to interoperable implementations.
Name of standard: Multimodal Conversation (approved). Acronym: MPAI-MMC
- Brief description: Enables human-machine conversation emulating human-human conversation.
Name of standard: Server-based Predictive Multiplayer Gaming (in development). Acronym: MPAI-SPG
- Brief description: Trains a network to compensate for data losses and to detect false data in online multiplayer gaming.
Name of standard: AI-Enhanced Video Coding (in development). Acronym: MPAI-EVC
- Brief description: Improves existing video coding with AI tools for short-to-medium-term applications.
Name of standard: End-to-End Video Coding (in development). Acronym: MPAI-EEV
- Brief description: Explores the promising area of AI-based “end-to-end” video coding for longer-term applications.
Name of standard: Connected Autonomous Vehicles (in development). Acronym: MPAI-CAV
- Brief description: Specifies components for Environment Sensing, Autonomous Motion, and Motion Actuation.
Name of standard: Avatar Representation and Animation (in development). Acronym: MPAI-ARA
- Brief description: Specifies descriptors of avatars impersonating real humans.
Name of standard: Neural Network Watermarking (in development). Acronym: MPAI-NNW
- Brief description: Measures the impact of adding ownership and licensing information to models and inferences.
Name of standard: Integrative Genomic/Sensor Analysis (in development). Acronym: MPAI-GSA
- Brief description: Compresses the data of high-throughput experiments combining genomic/proteomic and other data.
Name of standard: Mixed-reality Collaborative Spaces (in development). Acronym: MPAI-MCS
- Brief description: Supports collaboration of humans represented by avatars in virtual-reality spaces called Ambients.
Name of standard: Visual Object and Scene Description (in development). Acronym: MPAI-OSD
- Brief description: Describes objects and their attributes in a scene and the semantic description of the objects.
Visit the MPAI web site, contact the MPAI secretariat (email@example.com) for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media:
- LinkedIn https://www.linkedin.com/groups/13949076/
- Twitter https://twitter.com/mpaicommunity
- Facebook https://www.facebook.com/mpaicommunity
- Instagram https://www.instagram.com/mpaicommunity
- YouTube https://youtube.com/c/mpaistandards
Most important: join MPAI, share the fun, build the future.