Interactivity technologies: a key factor in the interactive documentary


The interactive documentary is a genre grounded in documentary images, but it would not exist without interactive technology. That is why we consider it appropriate to present this introduction and contextualization of the birth of such technologies a few decades ago. The history of interactivity technologies has been studied extensively by Ignasi Ribas (Ribas 2000), whose research we summarize here.

The first truly interactive audiovisual applications were produced in the LaserVision optical videodisc format during the 1980s. This system was developed by the company Philips in the 1970s. It was an analogue format and difficult to manage, but it brought together people from different areas of interactive communication. Reginald T. Friebus had already patented a means of recording sound and colour images on a disc for optical systems in 1929, but it was not until the invention of the laser, with its extraordinary ability to concentrate a beam of light, that it became possible for one side of a disc to contain a reasonable amount of audiovisual programming (Ribas, 2000:29-30).

Of all the optical systems that competed with each other during the 1970s and 1980s (there were over 25 in 1976), LaserVision prevailed due to its durability and ease of use. Its CLV (Constant Linear Velocity) format, designed for watching films in an essentially linear way, never succeeded, as it competed with home video recorders using recordable, erasable tape. It was the CAV (Constant Angular Velocity) format that made the first interactive audiovisual applications possible, and all the functions it allowed could be remote controlled. The secret of the CAV format lay in storing a single image on every revolution of the disc. By turning at a constant speed of 25 revolutions per second in the PAL system, while the laser read head made a small radial movement during each revolution, it could play moving video images. It could also produce a perfect pause of unlimited duration, by simply stopping the head so that it reproduced the same image 25 times per second, and it could fast forward and rewind by controlling the head's speed and direction along the radius of the disc. Most importantly, all the images on one side could be numbered from 1 to 54,000, with a digital code mixed into the image, and the head could be taken to any image within a few tenths of a second. The combination of this random access and a perfect, unlimited pause made the LaserVision CAV format the paradigm for the first interactive audiovisual applications. The 54,000 revolutions on one side could hold the same number of different images, providing 36 minutes of PAL video in any combination of still and moving images within these limits, with the addition of two switchable channels of sound (Ribas, 2000:30).
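The capacity and addressing figures above follow directly from the one-image-per-revolution design. A minimal Python sketch of the arithmetic (the constants come from the text; the function names are illustrative, not part of any historical system):

```python
# CAV LaserVision arithmetic, using the figures given in the text above.
FRAMES_PER_SIDE = 54_000  # one image per revolution, numbered 1 to 54,000
PAL_FPS = 25              # the disc spins at 25 revolutions per second (PAL)

def playback_minutes(frames: int = FRAMES_PER_SIDE, fps: int = PAL_FPS) -> float:
    """Continuous playback time if every frame holds moving video."""
    return frames / fps / 60

def is_addressable(frame_number: int) -> bool:
    """Random access: any numbered frame on a side can be sought directly."""
    return 1 <= frame_number <= FRAMES_PER_SIDE

print(playback_minutes())      # 36.0 minutes of PAL video per side
print(is_addressable(54_000))  # True
```

The same frame numbering also explains the "perfect pause": holding the head over one revolution simply replays that single frame 25 times per second.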

On its own, this was Level I interactivity; with an external computer, Level III. As Ribas points out (2000:29-30), a program on this computer could add fragments of video and sound, or video-quality still images, to its intrinsic interactive capabilities, albeit on a monitor other than that of the computer, as the image and sound stored on the LaserVision disc were still analogue. The system was indeed huge, difficult to manage, and its distribution among the general public was inconceivable: it required a computer with a monitor, a videodisc player with a television, and a connection between the two consisting of a special cable and the software to manage it. Furthermore, if computer output to the video screen was required, a special synchronization card, which was expensive, difficult to manage and not standardized, was also needed. Another major problem was the incompatibility between the NTSC and PAL television systems, which prevented the videodisc from becoming widespread.

This technology was widely used in information points, catalogues of large companies and stores, and in training applications for skills that benefit from the inclusion of realistic moving images (flight simulators, dangerous or costly repairs, etc.), especially in the U.S. market. It also had a very important market in education, once again in the United States, and this was the format in which the first applications for cultural dissemination were produced. The role of the U.S. production company Voyager was especially important in the cultural dissemination market in general, and that of art in particular, using the videodisc. As Ribas discusses in his article "Integrating media within interactive discourse: the case of cultural dissemination" (2009), excellent products about artists such as Van Gogh or Muybridge, or about museums such as The Art Institute of Chicago and The National Gallery of Art in Washington, among others, represented the first major conceptual change in the way culture was disseminated. Also worthy of mention is the Société ODA in Paris, which produced excellent videodiscs about the Louvre and Orsay museums, the forerunners of today's vast wealth of French interactive cultural products.

The emergence in the early 1990s of specific digital video formats changed things dramatically. These formats initially required special hardware, such as Intel's DVI, but were soon handled in software, at the expense of a quality that was initially at the limits of what any powerful microprocessor could manage. Hypertext generation systems gradually incorporated high-quality still images, sounds and even digital video, and became what we now call authoring languages.

The ability to digitize all the media around a multimedia application meant that the term took on a new meaning, guided by the shift from the concept of accumulation to that of integration, and this is what made all the multimedia and hypermedia interactive applications we have today possible. Digitization provides a number of fundamental advantages compared to the situation in the videodisc era: these all stem from the uniform treatment of the various media within the digital environment, because all the information is stored in files that the system manages in the same way. This means that computerised integration is simple and uniform, and the development languages of applications do not have to make basic distinctions based on the medium they are incorporating. Naturally, the necessary hardware has been simplified to a single computer with multimedia management capabilities, and in terms of applications, the content and its structure now reside in a single digital medium (Ribas, 2000:32).

In the late 1980s and early 1990s, when the multimedia digitization of the immediate future was becoming apparent, all the ideas and initiatives in the hypertext and interactive videodisc fields began to converge. It was the era of the first associations and conferences on the subject, and of the resurgence of the most important ideas on which the theory of interactive communication is based today (Ribas, 2000:33).

The adoption of these storage devices reflects a wider transfer of ideas about interactive, non-sequential ways of accessing information from the computer environment, a transfer that had begun some years earlier in the audiovisual production field. This seems to be confirmed by the fact that in the early years after digital integration, when digital video compression algorithms were not as effective as they are today and the CD-ROM, the optical device par excellence, was based on the low storage density of the old audio CD, no one thought of creating interactive audiovisual applications based on linear storage devices. The enormous potential of non-linearity, revealed by the videodisc and facilitated by digital media, meant that everyone preferred to work with very small, low-quality moving images, or with special decoding hardware, rather than return to the paradigm of linearity (Ribas, 2000:34).

Also available through these links are some of my presentations and communications at events and conferences, such as those given at the I-Docs Symposium (Bristol, 2011) and the McLuhan Galaxy Conference (Barcelona, 2011).


Arnau Gifreu Castells
Researcher, Professor and Producer
Universitat Ramon Llull / Universitat de Vic

Ribas, J. I. (2000), Caracterització dels interactius multimèdia de difusió cultural. Aproximació a un tractament específic, els "assaigs interactius" [pre-PhD research work], Barcelona: Universitat Pompeu Fabra, Faculty of Communication.

Recommended citation:

Gifreu, Arnau (2010), El documental multimèdia interactiu. Per una proposta de model d'anàlisi [research work], Department of Communication, Universitat Pompeu Fabra, pp. 79-82.

Gifreu, Arnau (2010), The interactive multimedia documentary. A proposed model of analysis [pre-PhD research], Department of Communication, Universitat Pompeu Fabra, pp. 79-82.