Accessibility Metadata and Learning Objects

Pete Rainger

Introduction

Within any group of people there will be a wide range of skills, abilities (or disabilities), learning styles and preferences, all of which lead to different requirements. Learners may face barriers to their learning experience when a multimedia learning object does not meet these "accessibility" requirements. However, these barriers can now potentially be identified in advance and accommodated by evaluating, analysing and exchanging accessibility information.

'Metadata' in the context of a learning object is data or information about that resource. In its simplest form, metadata can be understood as an electronic record containing data about a particular resource, much like the bibliographic reference card that describes a book in a library.

IMS (an international specifications development body) has now developed a set of metadata specifications called Access-For-All. The specification has two parts: the first, known as ACCLIP, describes the accessibility needs and preferences of a learner (i.e. a user profile); the second, known as ACCMD, describes the accessibility properties of a learning object (i.e. a resource profile).

The Access-For-All metadata specifications provide a means to enable a managed learning environment (MLE) system to match the accessibility properties of a resource to the needs of a learner. As an interoperable specification, it can be used across different institutional MLEs and platforms, allowing the potential for institutions and learning resource repositories to share accessibility information.

The Access-For-All specifications can be used to assist educators and learners in the discovery of resources. Systems implementing the specifications are able to automatically select the appropriate resource(s) for a particular learner, when available, thereby providing the user with resource(s) that meet their individual accessibility needs. When a mismatch between a learner's needs and a resource's properties is found, learners can be pointed towards alternative versions of the learning resource (that might better suit their needs) or a completely different learning resource that would fulfil the same learning objectives.
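The matching step described here can be sketched as follows. The property names (hasVisual, captionType, etc.) echo the specification's vocabulary, but the overall schema is a simplified, hypothetical one using plain dictionaries, not the actual IMS XML binding:

```python
# Hypothetical, simplified sketch of matching an ACCLIP-style learner
# profile against ACCMD-style resource properties. The real IMS
# bindings are XML; plain dictionaries are used here for illustration.

def resource_meets_needs(learner, resource):
    """Return True if every media type the learner cannot use is
    either absent from the resource or covered by an alternative."""
    # Each rule: (media flag in the resource, alternatives that cover it)
    rules = {
        "cannotUseVisual": ("hasVisual",
                            ["audioDescription", "altText", "longDescription"]),
        "cannotUseAuditory": ("hasAuditory",
                              ["captionType", "signLanguage"]),
    }
    for need, (media_flag, alternatives) in rules.items():
        if learner.get(need) and resource.get(media_flag):
            if not any(resource.get(alt) for alt in alternatives):
                return False
    return True

# A learner who cannot use auditory content, matched against two videos:
learner = {"cannotUseAuditory": True}
captioned_video = {"hasVisual": True, "hasAuditory": True,
                   "captionType": "verbatim"}
uncaptioned_video = {"hasVisual": True, "hasAuditory": True}

print(resource_meets_needs(learner, captioned_video))    # True
print(resource_meets_needs(learner, uncaptioned_video))  # False
```

In a full implementation this check would run inside the MLE at the point of resource delivery, with the profile and properties parsed from their XML records.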

Access-For-All metadata also provides a means to substitute or augment a resource with an equivalent or supplementary resource, as required by the accessibility needs and preferences recorded in a user's ACCLIP profile.
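A minimal sketch of the augmentation idea, again using dictionaries in place of the real XML records. The supplement names and file names here are invented for illustration:

```python
# Hypothetical sketch of augmenting a resource according to an
# ACCLIP-style preference list; dictionaries and file names stand in
# for the real XML records.

def augment(resource, preferences):
    """Return a copy of the resource with any declared supplementary
    parts the learner has asked for attached."""
    delivered = dict(resource)
    supplements = resource.get("supplements", {})
    delivered["attach"] = [supplements[p] for p in preferences
                           if p in supplements]
    return delivered

video = {"hasAuditory": True,
         "supplements": {"captions": "lecture1-captions.vtt",
                         "signLanguage": "lecture1-bsl.mp4"}}

result = augment(video, ["captions"])
print(result["attach"])  # ['lecture1-captions.vtt']
```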

What does the Access-For-All metadata describe?

The following section presents a simplified description of some of the core structures of Access-For-All metadata.

Using Access-For-All, learning objects can describe what types of media or interaction they contain:

hasVisual: An indication of whether or not the resource contains visual information (e.g. images, animation, video).
hasAuditory: An indication of whether or not the resource contains auditory information (e.g. sound clips, video).
hasText: An indication of whether or not the resource contains text.
hasTactile: An indication of whether or not the resource contains tactile interaction (e.g. a remote control simulation).

For each media type, the metadata can describe what alternatives the resource contains:

Alternatives to Visual Media
audioDescription: Indicates whether or not an audio description is available.
altText: Indicates whether or not the learning resource contains (short) alternative text descriptions for visual media.
longDescription: Indicates whether or not the learning resource contains long alternative text descriptions for visual media.
colourAvoidance: Declares whether the learning resource uses particular colours or combinations of colours (e.g. red, red/green or maximum contrast).

Alternatives to Text Media
graphicAlternative: An indication of whether or not the learning resource contains graphical alternatives to the text.
signLanguage: An indication of whether or not the learning resource contains sign language alternatives to the text.

Alternatives to Auditory Media
captionType: An indication of whether or not the learning resource contains captions for the auditory media, and whether those captions are verbatim, enhanced or written for a reduced reading level.
signLanguage: An indication of whether or not the learning resource contains sign language alternatives to the auditory media.
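Pulled together, these properties might appear in a resource record along the following lines. This fragment is illustrative only: the element names echo the specification's vocabulary, but it is not the actual ACCMD XML binding.

```xml
<!-- Illustrative only: a simplified sketch, not the real ACCMD binding -->
<accessibilityProperties>
  <hasVisual>true</hasVisual>
  <hasAuditory>true</hasAuditory>
  <hasText>true</hasText>
  <alternativesToVisual>
    <altText>true</altText>
    <longDescription>false</longDescription>
    <audioDescription>true</audioDescription>
  </alternativesToVisual>
  <alternativesToAuditory>
    <captionType>verbatim</captionType>
    <signLanguage>false</signLanguage>
  </alternativesToAuditory>
</accessibilityProperties>
```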
EARL Statements

Note: EARL, Evaluation And Repair Language, is a W3C language for describing the results of an evaluation performed on a digital resource such as a web site. One application of EARL is to describe the results of an accessibility evaluation of a web site or e-learning resource, and since EARL is a machine-readable language, the information it contains could potentially be processed by a web server, browser or assistive technology to enhance the accessibility of the resource.

Each learning object can be linked to separate EARL report(s) that could describe the feasibility of transforming the display or presentation (e.g. background and text colour) and control features (e.g. keyboard accessibility) of the resource on-the-fly. These reports could also be used as a mechanism to document the conformance of the learning object to a standard (e.g. an institutional accessibility policy or the W3C web content accessibility guidelines).
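Such a report might look broadly like the following, using the Assertion/TestResult pattern from the EARL schema. This is a sketch: the namespace, subject and test URIs are placeholders, and the normative vocabulary is defined by the W3C EARL specification itself.

```turtle
# Illustrative EARL-style assertion; all URIs below are placeholders.
@prefix earl: <http://www.w3.org/ns/earl#> .

<#assertion>
    a earl:Assertion ;
    earl:assertedBy <#evaluationTool> ;
    earl:subject <http://example.org/learning-objects/unit1> ;
    earl:test <http://example.org/accessibility-policy#keyboard-access> ;
    earl:result [ a earl:TestResult ; earl:outcome earl:passed ] .
```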

How could the Access-For-All metadata be used?

One of the most powerful features of the system is that learning resources can be substituted and/or augmented with an equivalent or supplementary learning resource based on a learner's accessibility needs.

Here are some examples:

  • Learners with a visual impairment may have difficulty interpreting images. They might require that graphical material be augmented with alternative text descriptions (and with long descriptions where the material has a high contextual educational significance).
  • Learners who are Deaf may prefer to learn in sign language, their primary language. They may require that text content be augmented with a sign language interpretation.
  • Learners who have English as a second (or other) language may find videos and audio clips containing spoken English difficult to learn from. Learners working in a noisy environment may also find it hard to hear audio. Both may require that all auditory media be augmented with text captions.
  • Learners with a visual learning style and a difficulty with reading may prefer to learn procedures by using diagrams and flow charts. They may require that lengthy text descriptions of procedures be substituted by a graphical alternative.
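These scenarios amount to a small lookup from a learner's stated needs to the alternative-media properties a suitable resource must provide. A hypothetical sketch: the property names on the right follow the tables earlier, while the profile keys on the left are invented for illustration.

```python
# Hypothetical mapping from the example learner needs above to the
# resource properties that would satisfy them. Profile keys are
# invented; property names follow the alternative-media tables.
REQUIRED_ALTERNATIVES = {
    "visualImpairment": ["altText", "longDescription"],
    "deafSignLanguageUser": ["signLanguage"],
    "englishAsAdditionalLanguage": ["captionType"],
    "noisyEnvironment": ["captionType"],
    "visualLearningStyle": ["graphicAlternative"],
}

def missing_alternatives(profile_needs, resource):
    """List the alternatives a resource lacks for the given needs."""
    wanted = {alt for need in profile_needs
              for alt in REQUIRED_ALTERNATIVES.get(need, [])}
    return sorted(alt for alt in wanted if not resource.get(alt))

resource = {"hasAuditory": True, "captionType": "verbatim"}
print(missing_alternatives(["englishAsAdditionalLanguage"], resource))
# -> [] : the captioned resource already meets this need
print(missing_alternatives(["deafSignLanguageUser"], resource))
# -> ['signLanguage'] : an augmentation or substitution is needed
```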

Possibilities for the future

It has long been known that multimedia learning resources have the potential to support a wider group of learners by providing content in a medium that supports their strengths (e.g. a visual learning style). The difficulty is that multimedia, used inappropriately, can hinder a learner by concentrating on their weaknesses (e.g. a sensory requirement). Using the Access-For-All metadata system could bring a new level of power to rich multimedia learning resources by putting the 'multi' back into multimedia.

The Access-For-All metadata system makes it possible to evaluate the match or mismatch of multimedia resources to the needs of a particular learner, whether that be a learner with a disability, a preference for a particular learning style or needs dictated by a difficult working environment. In the context of e-learning in Higher Education, this means that if the needs of a particular student are known, the suitability of the available course material can be evaluated in advance, helping institutions meet the requirements of the Disability Discrimination Act (DDA).

Pete Rainger runs Key2Access, an accessibility and assistive technology consultancy. His background is as an assistive technologist, researching new technologies to support students in Further and Higher Education. He later became a member of the education research faculty at the University of Sussex, working for the JISC TechDis service. This led him to work on the development of accessible e-learning, looking at both large national-scale productions (NLN Materials) and smaller hands-on, practitioner-based materials. Pete led the TechDis Accessibility Metadata research project, working with international groups to help develop the first specifications. His interests include inclusive learning design, accessibility skills development, accessibility metadata and innovative uses of assistive technology. He has personal experience of blindness, visual impairment, hearing impairment and dyslexia, which allows him to bring a unique perspective to the field.


Related Sites

Accessibility Metadata Project (TechDis)
The aim of the Accessibility Metadata Project was to develop an accessibility metadata specification for e-content resources such as web and intranet materials, and e-learning materials.
CEN ISSS Learning Technologies Workshop Accessibility Properties for Learning Resources
CEN-ISSS Learning Technologies Workshop Accessibility Properties for Learning Resources (APLR) is a European group looking at the integration of accessibility metadata with the Learning Object Metadata (LOM) standard.
Evaluation and Report Language (EARL) 1.0 (W3C)
The first version of Evaluation and Report Language (EARL), produced by the W3C.
IMS Accessibility
The IMS Global Learning Consortium's resources relating to accessibility and e-learning, including specifications, information models and best practice guides.