


A guide to understanding multimedia must begin with a definition of the term. In the electronics industry, multimedia is the combination of computer and video (Rosch, 1996); in general terms, multimedia is the combination of three elements: sound, images and text (McCormick, 1996); multimedia is the combination of at least two input or output media of data, which may be audio (sound, music), animation, video, text, graphics and images (Turban et al., 2002); or multimedia is a tool that creates dynamic and interactive presentations combining text, graphics, animation, audio and video images (Robin and Linda, 2001).

Another way to define multimedia is to place it in context, as Hofstetter (2001) does: multimedia is the use of computers to create and combine text, graphics, audio, video and animation by means of links and tools that allow the user to navigate, interact, create and communicate. This definition contains four essential components of multimedia. First, there must be a computer that coordinates what is seen and heard and that interacts with us. Second, there must be links connecting us with the information. Third, there must be navigation tools guiding us through the web of interconnected information. Fourth, multimedia provides a place for us to collect, process and communicate our own information and ideas. If one of these components is missing, it is not multimedia in the broad sense. If there is no computer to interact with, it is mixed media, not multimedia. If there are no links providing structure and dimension, it is a bookshelf, not multimedia. If there is no navigation allowing us to choose a course of action, it is a film, not multimedia. Likewise, if we have no room to create and contribute our own ideas, it is television, not multimedia. From this definition, multimedia can be online (internet) or offline (traditional).

Elements of multimedia

The supporting elements of multimedia include:

1. Text
The form of multimedia data that is easiest to store and control is text. Text is the element closest to us and the one we see most often. Text can form words, letters or narration in multimedia, presenting our language. The need for text depends on the purpose of the multimedia application. In general there are four kinds of text: printed text, scanned text, electronic text and hypertext.

2. Graphics
The reason for using images in a multimedia presentation or publication is that they attract more attention and reduce boredom compared with text. Images can summarize and present complex data in new and more useful ways. It is often said that a picture is worth a thousand words, but this only holds when we can display the desired image at the moment we need it. Multimedia helps us do exactly that, namely when a graphic image becomes the object of a link. Graphics often appear as a backdrop to text, providing a frame that embellishes it. In general there are five kinds of images or graphics: vector images, bitmap images, clip art, digitized pictures and hyperpictures.

3. Sound
Sound in multimedia computers, especially in business applications and games, is very useful. A multimedia computer without sound is merely unimedia, not multimedia. Sound can be added to a multimedia production through voice, music and sound effects. As with graphics, we can buy sound collections as well as create our own. Several kinds of sound objects commonly used in multimedia production are waveform audio, compact disc audio, MIDI soundtracks and MP3.

4. Video
Video is a recording of live or moving images shown in sequence. There are two kinds of video: analog video and digital video.
Analog video is formed from a series of electrical signals (analog waves) recorded by a camera and broadcast over the air. Digital video is formed from a series of digital signals describing each point as a minimum or maximum value, where the minimum means 0 and the maximum means 1. Three main components make up digital video: frame rate, frame size and data type. The frame rate describes how many times a frame appears per second, the frame size is the actual physical size of each frame, and the data type determines how many different colors can appear at the same time (a short illustrative calculation follows after this list).

5. Animation
In multimedia, animation is the use of the computer to create motion on the screen. There are nine kinds of animation: cel animation, frame animation, sprite animation, path animation, spline animation, vector animation, character animation, computational animation and morphing.
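The following minimal sketch (Python; the numbers and the function name are illustrative, not taken from the text above) shows how the three components of digital video – frame rate, frame size and data type (color depth) – together determine the raw, uncompressed data rate.

```python
# Illustrative sketch: frame rate x frame size x color depth = raw video data rate.
def raw_video_bitrate(frame_rate: int, width: int, height: int, bits_per_pixel: int) -> float:
    """Return the uncompressed data rate in megabits per second."""
    bits_per_frame = width * height * bits_per_pixel   # frame size times color depth
    bits_per_second = bits_per_frame * frame_rate      # frames shown each second
    return bits_per_second / 1_000_000

# 25 frames per second, 640x480 pixels, 24-bit color (about 16.7 million colors)
print(raw_video_bitrate(25, 640, 480, 24))  # roughly 184 Mbit/s before compression
```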

Ewald M. Jarz Department of Information Systems University of Innsbruck Austria
Ewald.Jarz@uibk.ac.at Tel. +43-512-507-7651 or 7652
1 Introduction
The strongly technical orientation of the multimedia development to date reveals a lack of theoretical foundation. Well-founded theoretical concepts are missing both in the development and in the application of multimedia technology.1 Approaches to a new, as yet unreflected multimedia paradigm can already be recognized in the literature.
On the one hand, the chance of a higher quality of information representation is seen;2 on the other hand, scenarios of dramatic social effects of multimedia technology are drawn.3 The intention of this contribution is to consider multimedia from three perspectives. The first point of view is that of the theory of science: the term and the environment associated with it are delimited, a classification of multimedia information and interaction types is given, and an overview of the problem fields of multimedia is provided. The second perspective moves the human being as the user of multimedia systems into the foreground. Here my assumption is that information is transported by multimedia systems. The term information is nonetheless ambiguous: on the one hand it stands for the process of informing, on the other hand for information itself as an entity. The term information system is normally used in the first sense. However, the function of information always requires the human being as the interpreter of information. From the viewpoint of the human, information is in the final analysis knowledge, which is stored in some form in the human being. The second point of view is therefore concerned with the knowledge acquisition of the human being through multimedia systems; educational and psychological aspects are shown. The third point of view frees itself of theoretical considerations and gives concrete instructions for designing multimedia systems.
1 Kerres, 1990, p. 71 f 2 Hoogeveen, 1995, p. 351 f 3 Brauner; Bickmann, 1994 and Titze, 1993, p. 15
2 Definition / What is Multimedia?
2.1 Multimedia in a broader sense.
The term multimedia is defined differently by many authors.4 There is broad consensus that the processing of the media must occur in a completely integrated manner, on a digital electronic basis and independently of each other. The definitions differ, in the end, on the question of which media are combined as "multiple".
A medium in the material sense designates an "agent" between two or more communication partners.5 The sender of a message must use a medium in order to send the message; a medium is first of all an information carrier. A sender produces a message by acting on a specific medium. Sound, for example, is a medium which is shaped by a speaker and received by a listener. Further media are, for instance, light, liquids, surfaces, solid bodies and so on. Among the media, one can distinguish between concrete media and abstract media.6 Concrete media are carrier media; light, for instance, can be a carrier medium. Abstract media use these carrier media for communication, like a reflecting surface. Abstract media shape concrete media in such a way that the concrete ones contain information. Abstract media (e.g. graphics) can in turn be the foundation for even more abstract media (e.g. writing), thereby themselves becoming concrete media, and so on. These connections result in a hierarchy of media use. After structuring the different media by sender, receiver and medium, it becomes clear that there can be as many media as desired, depending on the level of detail and the intended purpose. On this division, the question of when one can speak of different media as "multi"-media has infinitely many answers. As an alternative, the media can be distinguished according to the human sense organs.
Against this stands the classification of the ISO (International Organization for Standardization), in which the perception media are only one of six kinds of media, and many other multimedia definitions refer to one of the five others.7 If "medium" is an information carrier for communication – and information can only be activated by a subject (the human being, unlike the view taken in Shannon's concept of information) – then only the perception media are suitable for a valid multimedia definition, since only they imply the human being. A division of the media according to the human sense organs appears quite useful, since the human being occupies a central position here: the stimuli arriving from the environment can only be received through the sense organs and processed into information about changes in the environment. Illustration 1 represents the form of communication between human and machine.
4 e.g.: Frater; Paulißen, 1994, p 17 or Förster; Zwerneman, 1993, p. 10 f or Gertler, 1995, p. 8 ff, Laurel, 1994a, p. 346 or Börner; Schnellhardt, 1992, p. 18 or Hoogeveen, 1995, p. 348 or Phillips, 1992, p. 25 f or Förster; Zwerneman, 1993, p. 10 or Steinmetz, 1993, p. 19, Messina, 1993, p. 19 and Heinrich; Roithmayr, 1995, p. 359/ Haak; Issing, 1992, p. 24/ Klingberg, 1993, p. 144 ff/ Ambron; Hooper, 1990, p. xi and many more. A more detailed work about all definitions and their different approaches can be found in Jarz, 1997b, p. 12ff. 5 Heinrich; Roithmayr, 1995, p. 341 6 Meyer-Wegener, 1991, p. 20 f 7 ISO/ IEC, 1993
[Figure omitted. Illustration 1 depicts the communication loop between computer and human: the computer as sender shapes information in a medium and emits it via an output device; the human receives it through his perceptors and reacts via effectors on an input device, whereby the human becomes the sender and the computer the receiver.]
Illustration 1: human-machine-communication
But this point of view is incomplete. The information which the machine supplies for the human being initially comes from a human being too. Consequently, illustration 1 is actually only one part of a human-machine-human communication. If the information supplied by the computer is produced by the computer itself and not by another human as sender, then it is actually a nature-machine-human communication, because the computer processes data about reality for the human in such a way that it represents information for him (e.g. measurements, balance-sheet analyses, statistical evaluations and so forth). Illustration 1 is incomplete here too.
If the transmitting and the receiving human are the very same person (as, for example, during text processing), one cannot speak of a pure human-machine communication either, since this constellation is actually nothing other than a human-machine-human communication in which sender and receiver are the same person. These considerations lead to the conclusion that pure human-machine communication does not really exist; it is only an incomplete section of other communication forms – unless the machine is, as in Weizenbaum's thought experiment, an autonomous existence that cooperates with others of its kind.8 In that case the machine would be something like a form of life, and this human-machine communication could be compared with pure person-to-person communication.
Nevertheless, the human-machine communication form can be used for theoretical considerations. Its most important feature is the action that causes the machine to produce information. This action is either a command, which can also consist of a series of commands, or a temporal event. In response to this command, other command chains (programs) can be invoked, which make the machine send information. It is decisive that the action or the command has nothing to do with the information itself; the command only causes the representation of the information. This reciprocal action is described as interaction. The kind and way in which the interaction with the computer occurs (keyboard, mouse, data glove, body location and so on) can be called an interaction type. The kind and way in which the subsequent representation of information occurs (such as text, image, movie, speech, or movement in a flight simulator and so forth) can be called an information type.
8 Weizenbaum, 1990, p. 268ff
Multimedia in a broader sense is therefore the completely digital, integrated and independent processing of different information types, which can be controlled interactively by the user through one or several interaction types. This definition still allows a large bandwidth of application areas. For more precise statements, the multimedia term is delimited as "multimedia in a narrower sense".
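To make the decoupling of interaction and information concrete, the following minimal sketch (Python; all names, file names and the Scene/Representation structure are my own illustration, not from the paper) models a scene in which an interaction event merely triggers the representation of information types without carrying any information itself.

```python
# Hypothetical sketch of the broad-sense definition: the interaction type (the
# trigger) is decoupled from the information types whose representation it causes.
from dataclasses import dataclass, field

@dataclass
class Representation:
    info_type: str   # e.g. "text", "image", "speech", "movie"
    content: str     # reference to the actual material

@dataclass
class Scene:
    # maps an interaction (command or temporal event) to representations
    triggers: dict[str, list[Representation]] = field(default_factory=dict)

    def interact(self, action: str) -> list[Representation]:
        # the action carries no information; it only selects what is shown next
        return self.triggers.get(action, [])

intro = Scene(triggers={
    "click:start_button": [Representation("movie", "intro.avi"),
                           Representation("sound", "theme.wav")],
    "key:F1":             [Representation("text", "help.txt")],
})
print([r.info_type for r in intro.interact("click:start_button")])  # ['movie', 'sound']
```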
2.2 Multimedia in a narrower sense
The support of perception media by input and output devices has been improved by developments from the field of virtual reality.9 But low-cost, market-ready systems are mainly available for visual and auditory perception. Restricted to these two fields, multimedia is therefore rather a "bimedia". What is "multi", however, is the way information is codified: text, graphics, image, movie and animation are all different codifications within the same perception medium, visual perception. Only the representation of the information differs. Multimedia can therefore be seen as a combination of different information representations – the information types. Multimedia in a narrower sense refers only to audiovisual information types and the standard communication equipment such as keyboard, mouse and their derivatives.
2.2.1 Audiovisual information types
If the term information type takes the way of transmission between sender and receiver into consideration, then the essential distinction between continuous and discrete information can be made. Continuous information is time-critical, i.e. dependent on information sections following one another; the message is only valid and correct when the factor time is included.10 Discrete information is non-time-critical: the user can determine the moment and duration of consideration. Auditory information representations are time-critical (continuous); image information is non-time-critical (discrete). But if several images follow one another (moving image), the result is time-critical again; consequently, the moving image is a separate information type. In addition to human language, there is further (time-critical) auditory information, for example noises (engine noises, bird chirping and so on) or music.
The corresponding information type could subsequently be designated either comprehensively as audio, a general term for language, noises and music, or there could be a separate information type for human language and a further one for noises and music. Since language is very important for the human being and much work has been done in the field of language input and output (in particular in the field of artificial intelligence), it seems useful to see human language as a separate type of information. Accordingly, noise and music must be taken together as a further information type; the word "sound" can serve as a general term for it.11 Among the discrete visual information types, alphanumeric text seems to occupy a special position, since it is the visual expression of the most essential communication form between people: language. Therefore, alphanumeric text can be seen as its own information type. If text is the discrete counterpart of language, the score can be understood as a discrete counterpart to music and/or noise (sound) and therefore as a separate information type. In this case, the note values of a score are comparable with the alphabetic characters of a text. At first sight the representation form score does not appear suitable to represent the information noise (or better: all other auditory information except human language and music). The existing possibilities of MIDI and synthesizer technology, however, allow the codification of several noises as note values. Just as text compared to language causes a loss in the value of information (keyword: rhetoric/accentuation), a loss of information of the score compared to music and noise can be seen.
9 especially the development of data gloves with tactile feedback and so on in medicine, and flight simulators for pilot training. 10 Steinmetz, 1993, p. 14 11 The border between noise and music is not very sharp anyway, especially with respect to different subjective, aesthetic sensations.
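As a side note on the score/sound relationship just mentioned, the following sketch shows how a note value can be codified numerically, roughly in the spirit of MIDI. The formulas are the standard ones (MIDI note 69 corresponds to A4 at 440 Hz); the helper names are my own and not part of the paper.

```python
# Illustrative sketch: codifying a note value as a number, as MIDI does.
PITCH_CLASSES = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
                 "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def midi_number(name: str, octave: int) -> int:
    # In the common convention, C-1 is MIDI note 0, so C4 (middle C) is 60.
    return 12 * (octave + 1) + PITCH_CLASSES[name]

def frequency_hz(midi_note: int) -> float:
    # Equal temperament: every 12 semitones double the frequency.
    return 440.0 * 2 ** ((midi_note - 69) / 12)

print(midi_number("C", 4))          # 60 (middle C)
print(round(frequency_hz(60), 2))   # 261.63 Hz
```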
All other discrete visual information representations can be described as images, where a distinction is often made between graphics processing (graphic data processing) and image processing.12 This distinction refers above all to the different processing technology: vector orientation in the case of graphics and point orientation in the case of images. Despite the already strong mixing of both processing techniques in current graphics programs, "image" is described as an ordered set of picture elements (pixels) and "graphics" as a drawn, schematic chart-like representation of information, even though photorealistic pictures can already be produced with vector-oriented graphics. Retaining the division image/graphics is reasonable if "image" means a photorealistic representation and "graphics" a schematic representation, even if the image was produced vector-oriented and the graphic point-oriented. Under this aspect, a discrete visual piece of information is represented either as the information type image in a (photo)realistic way or as the information type graphics in a schematic way. The continuous counterpart of the discrete information type image is the moving image. Photorealistic moving images are in general described as movies; the term for schematic moving images is trick film or animation. Among continuous information representations one can therefore distinguish between the (photo)realistic information type movie and the schematic information type animation, even if the movie is produced vector-oriented and the animation point-oriented. The second essential distinction in the way of information representation is – in addition to the temporal dimension – the spatial dimension. People also receive audiovisual information spatially because of the paired arrangement of the sense organs eye and ear. Traditional visual output devices like monitor and paper support only two dimensions; perspective representations ("2 1/2 D") try to simulate the third dimension.
However, real spatial visual representations are only possible with stereoscopic output devices.13 The reception of auditory information can be seen analogously. Most auditory output devices provide only a two-dimensional representation. Stereo sound systems try to simulate the third dimension, but a real spatial hearing experience can be produced only by sound systems beyond quadrophony, which support sound sources from left/right, front/rear and above/below.14 Consequently, the representation type 2 D, 2 1/2 D or 3 D, depending on the output device, can apply to both visual and auditory information types. The third essential distinction in information representation is the inclusion of the acting human being. In the information types described so far, the representation of information is independent of the actions of the user; only the viewing moment, duration and temporal sequence can be determined by the user. These information types can be designated as passive.
12 Heinrich; Lehner; Roithmayr, 1994, p. 216 13 Aukstakalnis; Blater, 1994, p. 81ff 14 Wenzel, 1992
If the representation of information changes depending on the actions of the observer, one can speak of an active information type. In multimedia applications this form is found above all in spatial visual representations. If a body (e.g. a die) is represented, at most three faces are simultaneously visible. If the die can be turned by the observer, the already existing but hidden information of the remaining three faces becomes available to the observer. This information representation can be designated as an object. In this case the viewpoint of the observer (perspective) is fixed and the position of the object can be modified relative to it. This information type is to be taken as discrete, because no time- or order-critical continuum is necessary for the modification of the position. If the position of the object does not change during the information representation but the viewpoint of the observer does, then this is a further information type. As long as only one object is represented, no essential difference from the information type object – with the exception of shadows – is recognizable. Only when several objects are represented simultaneously does the change in the perspective of the observer become important: the information can be experienced only through a continuous change of the observer's perspective. This information type can be designated as "world" and is used in particular on high-performance systems in architecture and in the game sector.15
On account of the preceding considerations, the multimedia information types for audiovisual
information representations can now be listed. Illustration 2 shows the classification.
[Figure omitted. Illustration 2 classifies the audiovisual information types along three dimensions: time representation (discrete vs. continuous), action representation (passive vs. active) and space representation (2 D, 2 1/2 D, 3 D), within the auditory and visual perception media. The discrete/continuous pairs are Text/Speech, Score/Sound, Graphics/Animation and Image/Movie (passive) and Object/World (active).]
Illustration 2: Classification of audiovisual information types
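The classification can also be written down as a small data model. The following sketch is my own illustration of Illustration 2 (Python; the attribute and enum names are assumptions mirroring the figure, not code from the paper).

```python
# Hypothetical data model of the classification in Illustration 2.
from dataclasses import dataclass
from enum import Enum

class Time(Enum):
    DISCRETE = "discrete"
    CONTINUOUS = "continuous"

class Action(Enum):
    PASSIVE = "passive"
    ACTIVE = "active"

# The space dimension (2 D / 2 1/2 D / 3 D) depends on the output device and
# applies to every type, so it is not stored per information type here.

@dataclass(frozen=True)
class InformationType:
    name: str
    time: Time
    action: Action

INFORMATION_TYPES = [
    InformationType("text", Time.DISCRETE, Action.PASSIVE),
    InformationType("speech", Time.CONTINUOUS, Action.PASSIVE),
    InformationType("score", Time.DISCRETE, Action.PASSIVE),
    InformationType("sound", Time.CONTINUOUS, Action.PASSIVE),
    InformationType("graphics", Time.DISCRETE, Action.PASSIVE),
    InformationType("animation", Time.CONTINUOUS, Action.PASSIVE),
    InformationType("image", Time.DISCRETE, Action.PASSIVE),
    InformationType("movie", Time.CONTINUOUS, Action.PASSIVE),
    InformationType("object", Time.DISCRETE, Action.ACTIVE),
    InformationType("world", Time.CONTINUOUS, Action.ACTIVE),
]

# all time-critical (continuous) information types:
print([t.name for t in INFORMATION_TYPES if t.time is Time.CONTINUOUS])
```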
The (vertical) convertibility of an information type into another one of the same perception medium can be seen as a criterion for the selectivity of this division: if an information type can be converted simply, it is not unambiguous. The textual description of an image, for instance, is only very difficult to convert into an image and vice versa. In all cases of the classification in illustration 2 – with the exception of pure numeric values of the information type text – convertibility within a perception medium is hardly attainable. The conversion between image and graphics seems the most feasible, but the pattern-recognition processes necessary for it are quite complex too, and the works available so far supply results only for comparatively simple examples.16 The (horizontal) convertibility of information types from discrete to continuous is easier between different perception media.
15 e.g. Bauer, 1993 and Loeffler, 1993
Text conversion into speech (speech synthesis) is as readily available as speech recognition.17 Score conversion into music or noise is manageable through MIDI technology,18 and tone recognition with automatic notation is also available for several instruments.19 However, continuous information types also contain information which is necessarily lost in the conversion: language reduced to text loses accentuation and rhetoric, sound reduced to a score loses the musical expression. The representation of information mostly occurs not in just one information type, yet all information representations can be traced back to single information types or their combinations. The classification is built up hierarchically from bottom to top: every information type can include the information types lying under it and/or is built up from them. Yet every information type offers a representation of information which contains more than the sum of the information types contained in it. In this way an image, for example, can also contain graphics, score and text, or a movie can contain language and sound (e.g. a musical). The proposed systematics of information types is selective and has the advantage that, during the design of multimedia information systems, it clearly shows which basic (combination) possibilities for the representation of information exist. The structure replaces the term "medium" in previous multimedia definitions by the term "information type". On this basis, a theoretical multimedia paradigm can be developed in which statements about the mutual reciprocal effects of the single information types during the audiovisual information reception of the user are possible.
2.2.2 Interaction type
The techniques of the human-machine interface have reached maturity, and the results of research in different approaches are summarized in first theoretical concepts.20 Another classification is presented here, which illuminates less the technical and more the functional aspects, shows a stronger degree of abstraction and provides a framework for the design of multimedia applications.
The possibilities of data entry by the user can be determined by the type of donator, the type of control and the type of action. The type of donator can be either a pointing device, which produces spatial references, or a symbol generator, which causes predefined planned reactions. The type of control can be either physical or virtual.21 The type of action designates the variants which the user has in communication with the machine: either the selection of predefined items and/or the calling of predefined functions (static) or the changing of already available objects and/or the input of new values (dynamic). Illustration 3 shows the connections.
16 e.g. Cakmakov, 1995 and Babu, 1995 17 Steinmetz, 1993, p. 39ff and p. 45ff 18 Steinmetz, 1993, p. 33ff 19 Frater; Paulißen, 1994, p. 232f 20 see the classification of mouse, touchscreen, forcescreen, 3-d trackball, dataglove and datahelmet and so forth in CardEtal, 1992, p. 224 ff 21 This distinction can be found too at Kerres in Issing; Klimsa, 1995, p 32 ff
[Figure omitted. Illustration 3 relates the action types (static: select, call; dynamic: change, enter) to the type of control (virtual or physical) and the type of donator. Virtual symbol donators are, for example, buttons, menus and hotspots; virtual coordinate donators are sliders, control dials and scrollbars. Physical coordinate donators include indirect devices such as mouse and trackball and direct devices such as light pen, data glove and touchscreen; physical symbol donators include indirect devices such as keyboard and switches and direct ones such as speech and body control. Symbol donators trigger selection/click and enter interactions, coordinate donators control/set and move interactions.]
Illustration 3: Interaction type
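A minimal sketch of this classification, assuming my own naming (Python; none of the identifiers come from the paper), describes a concrete interaction by its donator type, type of control and action type.

```python
# Hypothetical sketch of the interaction-type classification of Illustration 3.
from dataclasses import dataclass
from enum import Enum

class Donator(Enum):
    SYMBOL = "symbol generator"      # button, menu, keyboard, speech control ...
    COORDINATE = "pointing device"   # slider, scrollbar, mouse, touchscreen ...

class Control(Enum):
    PHYSICAL = "physical"
    VIRTUAL = "virtual"

class ActionType(Enum):
    STATIC = "select/call"
    DYNAMIC = "change/enter"

@dataclass(frozen=True)
class Interaction:
    donator: Donator
    control: Control
    action: ActionType

# Clicking a virtual button with the mouse: a physical pointing device
# addresses a virtual symbol donator and calls a predefined function.
mouse = Interaction(Donator.COORDINATE, Control.PHYSICAL, ActionType.STATIC)
button = Interaction(Donator.SYMBOL, Control.VIRTUAL, ActionType.STATIC)
print(mouse, button, sep="\n")
```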
The variants in the individual fields represent the interaction possibilities. Communication with the computer first occurs via a physical interface. That can be either a symbol generator – for instance the keyboard – or a pointing device, for example the mouse. In this way, either an action can be caused immediately (pressing the escape key, for example, can terminate a movie) or virtual symbol generators or pointing devices can be addressed. The most frequent case is clicking a – virtual – button with the mouse and thereby releasing specific actions. A virtual button can also be pressed with the keyboard, for instance by hitting the return key to confirm the request for saving a file in a dialog box. However, other symbol generators such as voice or gesture controllers can also make this choice. Activating a virtual pointing device such as a control dial can likewise occur either through physical pointing devices (setting it with the mouse) or through physical symbol generators (adjusting it with key combinations).
The different interaction types can cause a planned reaction. The planned reaction is identical; however, it becomes accessible to the user in different ways, which are suited differently depending on the function of the planned reaction. Two kinds of experience which the user has in communication with the computer can be distinguished here:22 first person experience, where the user has the feeling of releasing an action directly – while activating a button on a touchscreen, for instance, the user has the feeling of pressing the button directly himself; and second person experience, where the user has the feeling that his action releases a planned reaction indirectly – while activating a button with the mouse, the user has the feeling that the button was pressed via clicking the mouse button. In this way it becomes clear that the experiences a user has with different input devices differ, although the result of the action is identical. This distinction allows the division into direct (possibility of the first person experience) and indirect (possibility of the second person experience) physical donators. Interactions give rise to events, which in turn supply the corresponding results. Table 1 lists the user-controlled events and their results.
22 Laurel, 1986 and Laurel, 1993, p. 112 ff
Pointing device, in motion – activated: events control, drag & drop, set; result: area code. Not activated: events move, enter, leave; result: motion code.
Pointing device, not in motion – activated: events click/double-click, down, up; result: position code. Not activated: event timer; result: time difference.
Symbol generator – activated: events down (selection), up (input); result: key code, symbol code. Not activated: event timer; result: time difference.
Table 1: interaction events and results
Activated means that the corresponding donator has been released; a button is activated, for example, when it is pressed. For pointing devices, activated means that the position which the pointing device supplies is marked; pressing the mouse button at a specific place is thus the activation. Pointing devices can be activated and/or in motion. If a pointing device is activated and moved simultaneously, the interaction type "adjust" or "set" is caused; in some systems a drag-and-drop event is caused as well: an object is marked, taken to another place (drag) and put down there (drop). This event provides an area code within which the movement occurred. If a pointing device is not activated and only moved, it causes a movement event, which provides a motion code indicating position, rate of position change and direction. Not all pointing devices provide a motion code: touchscreen and light pen cannot generate enter and leave events, since with them every event is an activated event. If the position indicator reaches a virtual object (e.g. a button on the screen), it causes an enter event; if it leaves the object, a leave event is caused. The mouse pointer, for example, causes an enter event when it reaches a button; this event can transform the mouse pointer into a hand with a stretched forefinger. While leaving the button, the leave event changes the mouse pointer from the hand back to the arrow.
If the pointing device is not moved but only activated, the interaction type click is caused. The down event (activated) provides the position code of the click. By combinations of down and up (down, up, down+up = click, double-click, triple click and so on) different functions can be called at a position. If the pointing device is neither moved nor activated, only a temporal event (timer) can occur, which provides a time difference. If a symbol generator is activated, it causes the interaction type "select" or "input". The down event (= pressing of a key) provides the key or symbol code, depending on the type of the symbol generator. Different functions can be caused by combinations of keys (e.g. shift key + key a = A). If the symbol generator is not activated, again only a temporal event (timer) can release a function and supply a time difference as its result. With the classifications of illustration 3 and table 1, the interaction variants of the users and their effects can be described completely. This division represents the basis for the development of multimedia applications: development and application environments must support the interaction events and the processing of their results.
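The event/result scheme of Table 1 can be sketched as a small dispatcher. The following code is purely illustrative (Python; the state fields and the returned codes are my own labels, not the paper's).

```python
# Hypothetical sketch: user-controlled pointer events mapped to the results of Table 1.
from dataclasses import dataclass

@dataclass
class PointerState:
    activated: bool   # e.g. mouse button held down
    in_motion: bool
    x: int = 0
    y: int = 0

def pointer_result(state: PointerState) -> dict:
    if state.activated and state.in_motion:        # set / drag & drop
        return {"event": "set", "result": "area code"}
    if not state.activated and state.in_motion:    # move, enter, leave
        return {"event": "move", "result": "motion code"}
    if state.activated:                            # click / double-click
        return {"event": "click", "result": ("position code", state.x, state.y)}
    return {"event": "timer", "result": "time difference"}

print(pointer_result(PointerState(activated=True, in_motion=False, x=120, y=45)))
# {'event': 'click', 'result': ('position code', 120, 45)}
```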
3 Multimedia Theory
3.1 Educational Aspects
In the construction of multimedia learning systems and mass information systems – from the technical-systemic point of view – the transported information is at the center of interest. In other paradigms (social, behaviour-oriented, psychological, didactic ones and so forth), this information is designated as knowledge: knowledge here is information understood and stored by people.23 From the learning-theoretical point of view, the disciplines of pedagogy, education and didactics observe the multimedia evolution partly critically.
Knowledge psychology supplies concrete instructions for action from the cognitive point of view.24 For media science and media psychology, multimedia is the fusion of computer, television and telephone.25
Multimedia technology is meanwhile available on low-cost personal computers and is no longer a major problem. Much more difficult is the breadth of abilities needed to create and edit all the information types that are required. Table 2 shows a selection of occupational groups whose competence is relevant to the variety of multimedia information types. A multimedia production additionally requires comprehensive network and programming techniques as well as dramaturgical, media-didactic and learning-psychological knowledge. Lopuck comments: "A jack of all trades is a master of none."26 The development of a multimedia application is therefore doomed from the beginning if too few resources are available for professionalization, or else it forces the – justified – question whether such a massive resource effort in fact yields a corresponding benefit. The answer to this question depends on the benefit, and this in turn depends on the field of application. Multimedia technology is pushed above all in two fields: in the field of mass information media (electronic newspapers, pay television, kiosk systems, presentations, advertising and so forth) and in the field of (virtual) learning. In the first field a benefit can easily be found; in the field of learning systems this is much more difficult. Looked at more closely, however, both fields have many common characteristics.
23 Müller; Merbach, 1992 24 see e.g. the papers in Issing; Klimsa, 1995
25 see e.g. Hickethier; Zielsinski, 1991 or Brauner; Bickmann, 1994 26 Lopuck, 1996, p. 3
Discrete information type – occupational groups | Continuous information type – occupational groups
Text – setter, printer, writer | Speech – speaker, sound engineer
Score – composer, arranger | Sound – sound technician/mixer, musician, singer, sound effects
Graphics – layout artist, desktop publisher, painter and draughtsman, graphic designer, illustrator | Animation – animator, modeller, computer animator, digital assistant
Image – photographer, retoucher, image editor | Movie – director, scriptwriter/dramatic adviser, cameraman, actor, illuminator, cutter, digital effects & assistant
Object – architect, modeller, computer illustrator | World – architect, computer graphics specialist, real-time specialist, VR designer
Table 2: Competence of traditional occupational groups for the multimedia information types
The objective in both cases is to transport information (knowledge) to people and to prepare it in such a way that acquiring and applying this knowledge is supported. In this way, the old interest in automated learning systems, and with it the question of their effectiveness, is awakened again. In the 17th century, Comenius thought about the efficiency of the teaching/learning process. In his book "Orbis Sensualium Pictus", which appeared in 1658 in Nuremberg, he argued that the subject matter should be presented through as many senses as possible. His fundamental book "Magna Didactica" influenced the design of teaching machines over centuries. Skinner developed the idea of "programmed learning" and formulated, in 1958, seven rules which became the basis of the computer-assisted teaching machines developed in the sixties and seventies. This didactic software became known as CBT (computer-based training); it was in the beginning not very user-friendly and required a connection to a mainframe. CBT opened a new market, which was soon flooded with monotonous and pedagogically useless programs. It was therefore not surprising that the initial euphoria quickly gave way to a reasonable skepticism and the approach failed to a large extent.
With the evolution of high-powered hardware at low prices, new technical possibilities seemed to balance these shortcomings. Multimedia technology already comes close to Comenius' idea of using several media. But regardless of the way information is represented, the didactic conception remained stuck in the behavioristic approach. Many multimedia programs are still primarily teaching programs and not learning programs, only now with the additional problem that further fields of complexity must be mastered due to the variety of multimedia information types. In the final analysis, multimedia technology only yielded a further possibility of information representation; the problems of pedagogy and didactics themselves did not change. The discussion of which information should be presented at which learning stage was merely enlarged by the aspect of the way of presentation.
3.2 Psychological Aspects
The combination of the different representation possibilities in a single system inspired the multimedia euphoria at the end of the eighties. Especially for learning systems, the expectations were particularly high. Illustration 4 shows the naive expectations about the effect of sense modalities and learning activities on memory performance. This representation is very popular in many publications; however, a scientifically founded source has not been found.
[Figure omitted: a bar chart of supposed retention percentages for reading, hearing, seeing, hearing and seeing, retelling and doing.]
Illustration 4: Naive assumptions about the effect of sense modalities and learning activities on memory performance27
Such expectations are based on a summation hypothesis, according to which the memory performance of hearing and seeing is the sum of the two channels (20% + 30% = 50%) – very much along the lines of "a lot helps a lot". Two theories are cited as arguments for this summation hypothesis: the dual coding theory of Paivio28 and the theory of hemisphere specialization. Both theories start from the assumption that information is processed by different cognitive systems depending on its codification. However, the summation hypothesis was not confirmed by empirical work but rather falsified, insofar as further factors such as prior knowledge, order, content and so forth play a decisive role in memory performance.29
However, the different ways of representing information suggest the suspicion that they are suitable for different purposes. To this end, a series of experiments was undertaken to classify them according to one or several features and to assign them to didactic functions; in this context the information representations were designated as media. Thus a number of media taxonomies with different feature categories have been developed, based on general pedagogic knowledge about direct and indirect and/or media-mediated processes of experience. Most of the ideas put forward today for the use of multimedia information representations go back to origins from the postwar years.
In his work on audio-visual education methods, which first appeared in 1946, Dale proposed dividing the process of knowledge acquisition according to its concrete steps.30 One of the first media taxonomies is the media-rating table of Gagné from 1965, in which a selection of media was rated concerning their suitability for taking over teaching functions.31 The Gagné model has been tested empirically comparatively well; however, it already shows strong indefiniteness within individual information types. This is an indication that not only the information presentation is important for remembering and that the thesis "a lot helps a lot" is not tenable here either.
27 from Weidenmann, 1995a, p. 68 28 Paivio, 1971 29 see the overview in Weidenmann, 1995a, p. 69 ff or Hasebrook, 1995, p. 176 ff 30 Dale, 1946
Many further media taxonomies have been developed,32 but they have proven unsatisfactory both theoretically and practically. The classifications were too general, both with respect to the media and with respect to teaching functions and learning aims, to provide a psychological-didactical frame theory for the use of media.33 They concentrated on the didactic functions but neglected the learning processes. Moreover, factors other than didactic ones, such as costs, organization, time requirements and so forth, were not included in accordance with their real influence.
The Aptitude-Treatment-Interaction approach34 (ATI) attempts to include the learning processes and the personality structure of the learner. The assumption here is that learning in general, and learning effects in particular, are the result of reciprocal interactions between instructional measures (treatment), including the use of media, and individual features of the individual learner (aptitudes/traits). But the ATI research has produced only few findings – concerning the media aspect – which can claim any universality.35
If the individual structure of the learner is decisive for memory performance, it appears only natural that each person experiences the information representations differently and that therefore no universal statements can be made. With that, the way of information representation becomes secondary and the instructional method gains importance.36 The basic setting of the application – the fixing of the user role, of the tasks and of the situation – is primarily decisive; only after these points are determined does the information presentation become important. How can the information representation be chosen correctly for the instructional method if different persons belong to very different learning types? The logical consequence would be to represent every content in all information types and to let the user choose which representation form suits him best. However, this consequence is more of a horror scenario for every producer and designer of a multimedia application, because in this way neither a harmonized screen design nor a cost framework can be maintained. At best, user classes can be defined and a few sections can be presented in accordance with these user classes. But even here the expenditure is enormous, because the production of multimedia applications is very costly.
If there is no unambiguous, generalizable correlation between memory performance, personality structure or prior knowledge of the learner and the information types, it can be presumed that it is just the other way round: the individual information types themselves have very specific strengths and weaknesses. If they are combined reasonably, they can either support each other or destroy the effect. The strengths and weaknesses of the information types then encounter the individual strengths and weaknesses of the user ("learning type") and can, depending on the constellation (strength encounters weakness, or vice versa), cause positive or negative effects. What can the strengths of the individual information types be?
31 Gagné, 1965
32 e.g. Hasebrook, 1995, p. 186 or Marmolin, 1992
33 Issing, 1988, p. 536
34 see the good representation of this concept and its representatives in Issing, 1988, p. 537 f
35 see Issing, 1988, p. 537
36 Weidenmann, 1995a, p. 78
Kracauer provides an idea for this with his concept of the "aesthetic principle": the performance within a certain medium is artistically the more satisfying, the more it derives from the specific qualities of this medium.37 Accordingly, it is possible that there are specific qualities for each information type, by means of which its use can be decided anew in every application. The following list points out particular qualities of the information types.
Text
The specific quality of the information type text is the individually adjustable studying rate. Every reader has his own rhythm; he can read sentences again, think a formulation over and so forth. Text is a redundant language: through this redundancy it is possible to understand the contents flexibly.38 It is suitable for thorough studying. In multimedia applications, longer reading of text is made more difficult by the small screen resolution. Consequently, text belongs rather in printed formats, e.g. in a book in which one can become immersed. Text in multimedia applications has more of a decorative and symbolic character, for statements or labels in graphics.
Speech
Speech is continuous. Even if a spoken sentence can be repeated, no thorough studying is possible. Speech can introduce, give overviews, stimulate and tell. Since speech between people requires a dialog and the computer is (still) not capable of a natural-language dialog, it can only be used in such systems as additional information – particularly where the visual system would be overloaded by additional text reading. Speech is ideal as an explanation of animation if it is synchronized with the events on the screen. Speech as a voice melody (e.g. as a rhyme) can cause strong memory effects.
Score
The score has its strength in composing or analyzing music and/or sound. It is the text of the melody and is suitable for those who want to reproduce or study it, in the widest sense for musicians. Ideal is the combination with sound, if the notes currently played are emphasized or characterized visually.
Sound
Sound, as music or noise, is able to awaken emotions. Music can stimulate moods or cause states of relaxation. Sound can become a sign of recognition, a leitmotiv, or, through combination with interaction types, develop a "feeling" for the corresponding actions (e.g. an audible button click). The combination of sound with animation, movie, object and world to produce realistic effects is ideal.
Graphics
The strength of graphics is to represent a context if it cannot be preserved in reality or is too complex to recognize. Graphics are discrete; the user himself determines viewing moment and duration. In this way, graphics are very suitable for the individual studying and analyzing of connections. The combination with text is good, since both are discrete representations, provided the text refers exactly to the graphics. Graphics allow more interpretation than the image and can be used better to support mental models.
Animation
Animation has its strength in the representation of temporal dependencies that in reality either cannot be preserved or are too complex to understand in their connections. Compared to graphics, the advantage lies in connections where only movement explains the function; the functional representation of a motor, for instance, is very suitable for an animated representation. The combination with speech and/or sound is ideal: synchronous speech helps to understand the representations and, compared to additional text, requires no change of the user's view. A further strength of animation is its decorative function: through effects combined with sound, interest and attention can be awakened. Like graphics, animation supports the formation of mental models.
37 Kracauer, 1985, p. 36 38 see the emotional comment on text in Postman, 1988, p. 91 ff
Image
The image represents an integrated connection. Through its photorealistic representation, the image is closely tied to the concrete contents; unlike text, the contents cannot be interpreted flexibly – the way the image shows a thing, it is. The image's strength is its good situating function: because of the proximity to reality, recognition of the contents represented in the image is easily possible. Moods can also be caused by images; in this case the combination of image with sound is very effective. Images are just as well suited for extensive contemplation and the recognition of details as for forming associations.
Movie
The movie shows temporal dependencies in realistic form. Its strength is authenticity in the documentary film or the awakening of emotions in the feature film. Through the possibilities of editing technique, temporal and spatial distances can be conveyed without linguistic communication. Ideal is the combination with speech and sound to produce either more reality or stronger emotions. The memory effect and the situating function are very high here too, but with the effect of small flexibility: the way the movie shows it, it is, and not otherwise.
Object
The strength of the object lies in allowing many possibilities, in trying out, in combining elements. Complex units, which can only be experienced by a person himself, are reasonable contents. The interactivity is high here; in this way, constructivistic paradigms can be put into action. All discrete information types can be made available as an object with their respective strengths and weaknesses. The combination with sound can further increase the experiential value if actions are coded auditorily.
World
The strength of the world lies in spatial visualization and orientation. Strong emotional and playful references can be built up by three-dimensional representations. Impressive scenarios, which can tie the user in emotionally, can be created by combination with objects and all other information types.
However, thanks to nonlinear techniques there is mostly no exclusive information representation: through hyperlinks, information (e.g. an image) can additionally be covered with text, an animation can be called, or graphics can be shown as an explanation. The use and combination of information types depend on many factors; some have been pointed out, many still lie in the field of future research. Ever-new combinations or performances of information types keep opening up new possibilities. A specific mix can often become the "trademark" of a producer, and information types can be used "alien", in violation of the aesthetic principle, and yet achieve reasonable effects.
4 Development of multimedia systems
The characteristic feature of multimedia developments is interdisciplinarity. To cover the bandwidth of all fields, a comprehensive approach is necessary. Purposeful planning can produce multimedia systems which are not ends in themselves but take into account the place of the program in its entire environment.39 The environment in which the system is to be used and the frame of reference must be considered as well. The conception has to be checked for each application and created anew. The heart of planning, however, is the multimedia storyboard. Scenes and the action possibilities of the user are recorded in this document. The development of the storyboard is a process in which a lot of creativity and media-didactic knowledge is necessary. The storyboard is then the basis for the programming and for the production of the individual information types (movie, sound, speech, image, text and so forth). From the beginning of the work on the storyboard, users should be involved continuously and their suggestions from the accompanying evaluations of prototypes should be included. A multimedia director coordinates the harmonization of the information types with the didactic setting. Multimedia applications show many similarities with film and television; editorial guidelines are therefore also necessary here for the preparation and, above all, the maintenance and/or updating of the contents. In the storyboard, the educational and psychological aspects of multimedia manifest themselves: interaction types and information types are combined in an evolutionary, creative process. The storyboard contains the environment, the frame story as well as the individual information representations and their triggers.
Basic principles can be used for the design of the storyboard. The basic principles presented here are abstractions worked out by me inductively, both from my own experience in the development of multimedia systems and from different sources in the literature which refer more or less explicitly to these principles. In the final analysis, however, these basic principles are heuristic theses. Table 3 summarizes the basic principles described in the next two sections.
Basic design principles: multiple codification, interaction transparency, metaphor consistence, interaction consistence/functional coherence, expectation mapping, active orientation support, fewer is more, transparency, interaction minimization.
Basic technical principles: user control, acceptable quality of information representations, system feedback at acceptable response times.
Table 3: Overview of the basic principles of multimedia applications
The basic design and technical principles are comparable with necessary conditions, which are important for the acceptance of a multimedia application; they are, however, by no means sufficient conditions for the success of the application. A comparison can be drawn with Herzberg's two-factor theory: the basic principles are a kind of hygiene factor, which only influences a dissatisfaction scale. If they are sufficiently fulfilled, only little dissatisfaction results; if they are disregarded, dissatisfaction increases. Completely independent of the dissatisfaction scale is the satisfaction scale, which can be seen as the motivation factor. The entire content-related and didactic setting influences the motivation factor.
39 for a more detailed description of system planning of multimedia systems see Jarz, E.: Systemplanung multimedialer Systeme. Wiesbaden: Gabler, 1997
4.1 Basic design principles
The basic design principles40 are already needed at the first Screendesign. They are to be understood in such a way that there are exceptions to the basic principle which can lead consciously to specific, desired – even often very creative – effects. Multiple codification The simultaneous responding of several input channels can increase the memory effect of information representations. Essential elements should be coded multiple. A multiple codification can be achieved Within the information type Different content fields (information connections), which are represented with the same information type, are coded differently by varying of general design parameters as e.g. color or element order. So the color of the background in an information connection for instance can be red and in another one green. By different information types In accordance with the dual coding theory of Paivio41 and the learning model of Vester42, it is reasonable to offer the identical information contents by a combination of different codification (information types). The choice of the right combination of the information types should occur in accordance with their strengths and weaknesses. Interaction transparency The basic principle of interaction transparency means that the User can realize at any time, both the interaction facilities which are available in a scene and their effects after the interaction. An example to this is pressing a virtual button with the mouse. If the User moves the mouse over the button, he causes the event “entry” and instead of the mouse pointer a symbolized hand with stretched forefinger appears. In such a way, it is announced for the User that an activatable information chunk is available here. If the User has pressed the button, as a result he immediately sees the effect of his interaction e.g. that the button is represented inverted, which looks like “pressed”. Metaphor consistence The basic principle of metaphor consistence says that the stylistic devices, which are used in the application, should match the cognitive models from the everyday life world of the Users. The literary terms, which emboss the connotative importance of the metonymy and of synekdoche, are fundamental for metaphor formation. Metonymy is a turn of speech in which an associated detail is used in order to represent an idea or a notion. Therefore maybe the crown is a sign for a kingship. Synektoche is a turn of speech, where a part stands for the whole or the whole stands for a part. Therefore, e.g. the car for the engine or the wheel for the car.43 If a virtual button is pressed for example in an application, a reasonable metaphor consistence is a sound,
40 see here the first details in Lopuck, 1996, p. 52
41 Paivio, 1971
42 Vester, 1980
43 Monaco, 1980, p. 149
which simulates the clicking of a physical key; in this way the metaphor becomes consistent and the user has the impression of really having pressed a key. Visual codification is above all essential for metaphor consistence: the graphic symbols must be "guessable". However, what a correct metaphor is for users differs from culture to culture.
Interaction consistence / functional coherence
The same functions should always be presented at the same place and in the same information representation throughout an application. Conversely, the same information representation (e.g. a symbol) must always make the same function available. If sound elements are used, for example, to mark those fields that are "active", i.e. where planned reactions occur, then no inactive place in the entire application should be marked with a sound. The basic principle of interaction consistence / functional coherence can also be applied to the consistence of function calls between applications. Users of the graphic operating systems of Microsoft, for example, are used to getting a help function by pressing the F1 key. Accordingly, in multimedia applications that run on these operating systems, pressing the F1 key should also activate the help function. For applications that run on different platforms, it should be considered that the user groups of each manufacturer's operating system form a culture of their own and expect different functions. An application is ended in Windows environments, for instance, with the key combination "Alt-F4", whereas on Apple's operating system this function is achieved with the key combination "Command-Q".
Expectation mapping
The basic principle of expectation mapping says that users have a certain notion or expectation of an application. Expectation mapping is successful if these expectations are met or exceeded. This basic principle results in multimedia applications reaching an ever higher standard: what users expect depends on what they have already seen. The evolution spiral and the life-cycle model also become clear here: information types such as 2 1/2-D objects and worlds, which are still rather rare, will soon become standard and, in the not too distant future, almost a necessity.
Active orientation support
While navigating in complex systems, continuous information representations should be provided that show users at any time where they are and simultaneously offer the possibility to jump to neighboring or hierarchically preceding elements.
Fewer is more
The basic principle of "fewer is more" means reducing the functions to those essential elements that a user needs in a scene at the same time. Overcrowded and overloaded screens confuse more than they help. This basic principle is comparable with the KISS concept (keep it simple and short).
Transparency
Transparency means that a good interface design remains unnoticed. If users must invest a lot of time to find out the possibilities of the interface, they are distracted from the contents. Transparency means that the system works largely in a self-explanatory way and makes help functions practically unnecessary.
Interaction minimization
Not more than three interactions (e.g. mouse clicks) should be required to reach important, often needed functions.44 Hierarchies that are nested too deeply often violate this basic principle if they offer no direct way to jump from, e.g., layer 3 in part 2 to layer 4 in part 1; in this example five interaction steps would be necessary.
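As a rough illustration of interaction transparency and immediate feedback, here is a minimal sketch in Python with the standard tkinter toolkit (the widget text, colors and cursor choice are illustrative assumptions, not taken from the text above): the cursor changes when the pointer enters a button, and the button's colors are inverted while it is pressed, so the user sees the effect of the interaction at once.

```python
# Minimal sketch of "interaction transparency": the cursor signals an
# activatable element, and pressing gives immediate visual feedback.
# Assumptions: Python 3 with tkinter; texts/colors are illustrative only.
import tkinter as tk

root = tk.Tk()
root.title("Interaction transparency sketch")

button = tk.Label(root, text="Play video", bg="lightgrey", fg="black",
                  padx=20, pady=10, cursor="hand2")  # hand cursor on hover
button.pack(padx=40, pady=40)

def on_press(event):
    # Invert the colors so the element immediately looks "pressed".
    button.configure(bg="black", fg="white")

def on_release(event):
    button.configure(bg="lightgrey", fg="black")
    print("Action triggered: start playback")  # the actual effect

button.bind("<ButtonPress-1>", on_press)
button.bind("<ButtonRelease-1>", on_release)

root.mainloop()
```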
4.2 Basic technical principles
The following basic technical principles can be derived as guidelines for the development of the storyboard:
User control
The user must be able to take control of the system at any time. In particular for continuous information types, the user must be able to control and/or break off the linear process.
Acceptable quality of information representations
The technical perfection of the information types plays an essential role for user acceptance. Continuous information types in particular do not reach, on standard PC systems, the quality people are used to from film and television. Quality also depends strongly on the expectations of the users. In 1990 an all-digital movie on a traditional PC system was a sensation, even in stamp-sized format and with a not very synchronous sequence. Meanwhile the expectations of users have risen, so that movies which do not play fluently are felt to be out of date.
System feedback at acceptable response times
The system must give the user feedback on his interactions within an acceptable response time. If the response time is exceeded, the system response can no longer be clearly attributed to the interaction: the interaction process is disturbed, the user's attention is diverted and the train of thought is interrupted.
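A minimal sketch of the response-time idea, assuming a hypothetical handler function and a one-second budget (both are illustrative assumptions, not values from the text): if the handler takes longer than the budget, the application at least reports that it is still working, so the feedback stays attributable to the user's interaction.

```python
# Sketch: wrap an interaction handler so the user always gets feedback
# within a fixed response-time budget. The 1.0 s budget and the handler
# below are illustrative assumptions only.
import threading
import time

RESPONSE_BUDGET_S = 1.0

def with_feedback(handler):
    def wrapped(*args, **kwargs):
        done = threading.Event()

        def notify_if_slow():
            if not done.wait(RESPONSE_BUDGET_S):
                print("Still working...")  # interim feedback to the user

        threading.Thread(target=notify_if_slow, daemon=True).start()
        try:
            return handler(*args, **kwargs)
        finally:
            done.set()
    return wrapped

@with_feedback
def load_video_scene(name):
    time.sleep(2.5)            # stands in for slow disk or network access
    print(f"Scene '{name}' loaded")

load_video_scene("intro")
```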
4.3 Cultural influences
The development of the storyboard is shaped by creative, personal elements. Every developer has a particular cultural background, formed both by the geographical culture area and by the social background; additional influences result from the specific organizational culture. It also matters whether or not the user shares the developer's complex cultural background. If the cultures correspond to a large extent in their state of mind, values, norms, rites and language habits, acceptance problems of a multimedia application will be minimal. If the cultural backgrounds of the developer (team) and the user are only slightly congruent, a lack of communication occurs. The potential for such difficulties is vividly represented in Watzlawick's concept of the triad of confusion, disinformation and communication.45 During the development of multimedia applications it is therefore important to pay attention to these cultural differences; the analysis of the user role and of the reference framework in the storyboarding process serves this purpose. The insights gained into cultural particularities of the user role affect the following essential fields:
Screen layout and screen design. Both elements depend essentially on writing and reading habits.
44 Lopuck, 1996, p. 52
45 Watzlawick, 1995
Effect of colors. In many culture areas colors have very particular meanings, which must ultimately be synchronized semantically with the contents.
Effect of language. Every culture area has a particular language with particular idiomatic expressions (dialect) that identify speakers from other culture areas as "foreigners"; corresponding defensive attitudes or outright communication barriers can arise.
Effect of graphics. The composition of graphics for decorative functions is shaped essentially by the peculiarities of the cultural background. In particular, the setup and style of a picture allow inferences about its cultural origin.
Recognition of symbols. Symbols are highly culture-specific; even traffic signs differ from country to country. Symbols are deeply linked to a culture, and their correct interpretation therefore depends significantly on the cultural background. Symbols with different meanings can very quickly lead to the forming of a front, because they are mostly charged emotionally. Since many symbols (e.g. icons) are used in multimedia applications, attention to differences in symbol display is important already in the draft.
Metaphor consistence. Users have specific schemes according to their cultural area. The essential metaphors of an application must be adapted to these schemes in order to maintain the basic principle of metaphor consistence.
Interaction consistence / functional coherence. Accustomed interaction peculiarities exist not only within a geographical but also within an operating-system-specific culture area. Users who swear by "their" operating system also expect the accustomed functions with the usual controls.
Consequently, the development of an application cannot simply be translated from one culture area into another. At the very least it should always be checked whether, and which, cultural differences exist. Since the development of multimedia systems is costly, the push toward a larger market, and thus export into other culture areas, is a legitimate efficiency objective, and under this pressure a cultural adaptation is usually dispensed with. This shortcoming is, however, put into perspective by the reach and the effects of international mass media. In advertising, campaigns that have to consider cultural peculiarities are therefore no longer developed country-specifically (with the exception of language), but rather for customer segments that do not differ from each other across countries.46 The user becomes a cosmopolitan; culture no longer grows within a country but is almost always set by company philosophies. The marketing strategy of Coca-Cola is the best example of this. Nevertheless, cultural influences cannot yet be negated and must be considered in the design or adaptation stage of a good multimedia learning or mass-information system.
5 Further perspectives
The development of multimedia applications requires very broad and highly qualified knowledge. In this way, however, systems can be developed that really guarantee a more efficient knowledge-transfer
46 Schweiger; Schrattenecker, 1992, p. 174ff
process.47 Technology is no longer the main problem. The challenge is the content-related design, which must offer support for information transfer and be used reasonably within a framework of action, so that the personal information requirement can keep pace with the growth of knowledge in society. To this end, the concepts described in this paper show the variants of representation and interaction forms and allow their reasonable combination in a storyboard according to the basic technical and design principles.
6 References
Ambron; Hooper, 1990
Ambron, S.; Hooper, K. (Hrsg.): Learning with interactive multimedia: Developing and using multimedia tools in education.- Redmond, WA: Microsoft Press, 1990
Aukstakalnis; Blatner, 1994
Aukstakalnis, S.; Blatner, D.: Cyberspace: Die Entdeckung künstlicher Welten.- Köln: vgs, 1994
Babu, 1995
Babu, G. P.; Mehtre, B. M.; Kankanhalli, M. S.: Color Indexing for Efficient Image Retrieval. In: Multimedia Tools and Applications, Volume 1, #4, 1995
Bauer, 1993
Bauer, W.; Riedel, O.: VR in Space- and Building-planning. In: Brody, F.; Morawetz, R. F. (Hrsg.): Virtual Reality Vienna 1993.- Wien: 1993, S. 34 f
Blattner; Dannenberg, 1992
Blattner, M.M.; Dannenberg, R.B. (Hrsg.): Multimedia Interface Design.- Wokingham u.a.: Addison-Wesley, 1992
Börner; Schnellhardt, 1992
Börner, W.; Schnellhardt, G.: Multimedia.- München: te-wi Verl., 1992
Brauner; Bickmann, 1994
Brauner, J.; Bickmann, R.: Die multimediale Gesellschaft.- Frankfurt am Main: Campus, 1994
Cakmakov, 1995
Cakmakov, D.; Davcev, D.: Information Retrieval and Filtering of Multimedia Mineral Data. In: Multimedia Tools and Applications, Volume 1, #4, 1995
CardEtal, 1992
Card, S. K.; Mackinlay, J. D.; Robertson, G.: The Design Space of Input Devices. In: Blattner, M.M.; Dannenberg, R.B. (Hrsg.): Multimedia Interface Design.- Wokingham u.a.: Addison-Wesley, 1992
Charwat, 1994
Charwat, H.-J.: Lexikon der Mensch-Maschine-Kommunikation, 2. Auflage.- München; Wien: Oldenbourg, 1994
Dale, 1946
Dale, E.: Audiovisual methods in teaching.- New York: Holt, Rinehart & Winston, 1946
Förster; Zwerneman, 1993
Förster, H.-P.; Zwernemann, M.: Multimedia – Die Evolution der Sinne!.- Neuwied; Kriftel; Berlin: Luchterhand, 1993
Frater; Paulißen, 1994
Frater, H.; Paulißen, D.: Das große Buch zu Multimedia.- Düsseldorf: Data Becker, 1994
Gagné, 1965
Gagné, R. M.: The conditions of learning.- New York: Holt, Rinehart & Winston, 1965
Gagné, 1987
Gagné, R. M. (Hrsg.): Instructional technology: foundations.- Hillsdale, N. J.: Erlbaum, 1987
Gertler, 1995
Gertler, N.: Multimedia Illustriert.- Haar bei München: Markt und Technik, 1995
Haak; Issing, 1992
Haak, J.; Issing, L.: Multimedia-Didaktik – State of the art. In: Dette, K./ Haupt, D./ Polze, C. (Hrsg.): Multimedia, und Computeranwendungen in der Lehre; Das Computer-Investitions-Programm (CIP) in der Nutzanwendung. Mikrocomputer-Forum für Bildung und Wissenschaft 5.- Heidelberg: Springer, 1992
Hasebrook, 1995
Hasebrook, J.: Multimedia- Psychologie: eine neue Perspektive menschlicher Kommunikation.- Heidelberg; Berlin; Oxford: Spektrum, 1995
Heinrich; Lehner; Roithmayr, 1994
Heinrich, L. J.; Lehner, F.; Roithmayr, F.: Informations- und Kommunikationstechnik für Betriebswirte und Wirtschaftsinformatiker, 4. Auflage.- München; Wien: Oldenbourg, 1994
Heinrich; Roithmayr, 1995
Heinrich, L. J.; Roithmayr, F.: Wirtschaftsinformatik-Lexikon, 5. Auflage.- München; Wien: 1995
47 see here Jarz, 1997a
Hickethier; Zielsinski, 1991
Hickethier, K; Zielsinski, S.: Medien/ Kultur. Schnittstellen zwischen Medienwissenschaft, Medienpraxis und gesellschaftlicher Kommunikation. -Berlin: Wissenschaftsverlag Volker Spiess Berlin, 1991
Hoogeveen, 1995
Hoogeveen, M.: Towards a new multimedia paradigm: is multimedia assisted instruction really effective?. In: Maurer, H. (Hrsg.): Educational Multimedia and Hypermedia.- Graz, 1995
ISO/ IEC, 1993
ISO/ IEC JTC1/ SC29/ WG12: Information Technology: Coded Representation of Multimedia and Hypermedia Objects (MHEG).- ISO Commitee Draft: ISO/ IEC CD 13522-1, 1993
Issing, 1988
Issing, L.J.: Wissensvermittlung mit Medien. In: Mandl, H.; Spada, H. (Hrsg.): Wissenspsychologie.- München, Weinheim: Psychologie- Verlags- Union, 1988
Issing, 1995
Issing, L. J.: Instruktionsdesign für Multimedia. In: Issing, L. J.; Klimsa, P. (Hrsg.): Information und Lernen mit Multimedia.- Weinheim: Psychologie-Verl.-Union, 1995
Issing; Klimsa, 1995
Issing, L. J.; Klimsa, P. (Hrsg.): Information und Lernen mit Multimedia.- Weinheim: Psychologie-Verl.-Union, 1995
Jarz, 1997a
Jarz, E.M.; Kainz, G.A.; Walpoth, G.: Multimedia-Based Case Studies in Education: Design, Development, and Evaluation of Multimedia-Based Case Studies
Jarz, 1997b
Jarz, E.M.: Entwicklung multimedialer Systeme.- Wiesbaden: Gabler, 1997
Jarz; Kainz; Walpoth, 1995
Jarz, E.; Kainz, G. A.; Walpoth, G.: The design and development of multimedia-based case studies. In: Maurer, H.: Educational Multimedia and Hypermedia.- Graz: 1995
Jarz; Kainz; Walpoth, 1996
Jarz, E.; Kainz, G. A.; Walpoth, G.: Multimedia based Case studies in Education. In: Proceedings of ED-Media 1996.- Boston: 1996
Kerres, 1990
Kerres, M.: Entwicklung und Einsatz computergestützter Lernmedien; Aspekte des Software-Engineerings multimedialer Teachware. In: Wirtschaftsinformatik, 32. Jahrgang, Heft 1, Februar 1990
Klimsa, 1995
Klimsa, P.: Multimedia aus psychologischer und didaktischer Sicht. In: Issing, L. J.; Klimsa, P. (Hrsg.): Information und Lernen mit Multimedia.- Weinheim: Psychologie-Verl.-Union, 1995
Klingberg, 1993
Klingberg, K. D. (Hrsg.): ABC der Multimedia-Technologie.- Bergheim: Multikom-Verl., 1993
Kracauer, 1985
Kracauer, S.: Theorie des Films: Die Errettung der äußeren Wirklichkeit.- Frankfurt am Main: Suhrkamp, 1985
Laurel, 1986
Laurel, B.: Interface as Mimesis. In: Norman, D. A.; Draper, S.: User Centered Sys-tem Design: New Perspectives on Human-Computer Interaction.- Hillsdale, NJ: Lawrence Erlbaum, 1986
Laurel, 1993
Laurel, B.: Computers as Theatre.- New York u.a.: Addison-Wesley, 1993
Laurel, 1994a
Laurel, B.: New Directions. In: Laurel, B. (Hrsg.): The art of human-computer interface design.- Mountford, USA: Addison-Wesley, 1994
Laurel, 1994b
Laurel, B. (Hrsg.): The art of human-computer interface design, 8. printing.- Mountford, USA: Addison-Wesley, 1994
Loeffler, 1993
Loeffler, C. E.: Virtual Polis. In: Brody, F.; Morawetz, R. F. (Hrsg.): Virtual Reality Vienna 1993.- Wien: 1993, S. 6 f
Lopuck, 1996
Lopuck, L.: Designing Multimedia.- Berkeley, CA: Peachpit Press, 1996
Marmolin, 1992
Marmolin, H.: Multimedia from the Perspectives of Psychology. In: Kjelldahl, L. (Hrsg): Multimedia: Systems, Interaction and Applications.- Stockholm: Springer, 1992
Maschke, 1995
Maschke, T.: Faszination der Schwarzweiß-Fotografie; 4. Auflage.- Augsburg: Augustus, 1995
Mathis, 1994
Mathis, T.: Evaluierung von Multimediaanwendungen.- Innsbruck: 1994 – Diplomarbeit
Maurer, 1995
Maurer, H. (Hrsg.): Educational Multimedia and Hypermedia; Proceedings of ED-Media 95 – World Conference on Educational Multimedia and Hypermedia.- Graz: AACE, 1995
Messina, 1993
Messina, C.: Was ist Multimedia? Eine allgemeinverständliche Einführung.- München; Wien: 1993
Meyer-Wegener, 1991
Meyer-Wegener, K.: Multimedia-Datenbanken.- Stuttgart: Teubner, 1991
Mikunda, 1986
Mikunda, C.: Kino spüren.- München: Filmland Presse, 1986
Monaco, 1980
Monaco, J.: Film verstehen, 2. Auflage.- Hamburg: rororo, 1980
Müller; Merbach, 1992
Müller-Merbach, H.: Perspektiven einer informationsorientierten Betriebswirtschaftslehre. In: Konegen-Grenier, C., Schlaffke, W. (Hrsg.):Praxisbezug und soziale Kompetenz, Kölner Texte & Thesen, S. 375 ff
Paivio, 1971
Paivio, A.: Imagery and Verbal Processes.- New York: 1971
Phillips, 1992
Phillips, R. L.: Opportunities for Multimedia in Education. In: Interactive Learning Through Visualization: The Impact of Computer Graphics in Education.- Berlin; Heidelberg; New York: Springer 1992
Postman, 1988
Postman, N.: Wir amüsieren uns zu Tode: Urteilsbildung im Zeitalter der Unterhaltungsindustrie.- Frankfurt am Main: Fischer TB, 1988
Preece et al., 1994
Preece, J. et al.: Human-computer interaction.- Loughborough: Addison-Wesley, 1994
Schweiger; Schrattenecker, 1992
Schweiger, G.; Schrattenecker, G.: Werbung: Eine Einführung; 3. Auflage.- Stuttgart: Fischer, 1992
Shannon, 1949
Shannon, C.; Weaver, W.: The mathematical theory of communication.- Illinois: University of Illinois Press, 1949
Shneiderman, 1992
Shneiderman, B.: Designing the User Interface: Strategies for Effective Human-Computer Interaction, 2nd edition.- Reading, MA: Addison-Wesley, 1992
Skinner, 1953
Skinner, B. F.: Science and Human Behavior.- New York: Free Press, 1953
Steinmetz, 1993
Steinmetz, R.: Multimedia-Technologie.- Berlin; Heidelberg; New York: Springer, 1993
Titze, 1993
Titze, H.: Das philosophische Gesamtwerk; Band 4: Theorie der Information.- Berlin, 1993
Vester, 1980
Vester, F.: Denken, Lernen, Vergessen, 5. Auflage.- München: DTV, 1980
Watzlawick, 1995
Watzlawick, P.: Wie wirklich ist die Wirklichkeit? Wahn, Täuschung, Verstehen; 20. Auflage.- München: Piper, 1995
Weidenmann, 1995a
Weidenmann, B.: Multicodierung und Multimodalität im Lernprozess. In: Issing, L. J.; Klimsa, P. (Hrsg.): Information und Lernen mit Multimedia.- Weinheim: Psychologie-Verl.-Union, 1995
Weizenbaum, 1990
Weizenbaum, J.: Die Macht der Computer und die Ohnmacht der Vernunft, 8. Auflage.- Frankfurt am Main: Suhrkamp, 1990
Wenzel, 1992
Wenzel, E. M.: Three-Dimensional Virtual Acoustic Displays. In: Blattner, M.M.; Dannenberg, R.B. (Hrsg.): Multimedia Interface Design.- Wokingham u.a.: Addison-Wesley, 1992
Wodaski, 1995
Wodaski, R.: Multimedia für Insider.- Haar bei München: Markt und Technik, 1995
Wratil; Schwampe, 1992
Wratil, P.; Schwampe, D.: Multimedia für Videa und PC; Techniken und Einsatzmöglichkeiten.- Haar bei München: Markt & Technik, 1992
Yazdani; Pollard, 1993
Yazdani, M.; Pollard, D.: Multilingual aspects of a multimedia database of learning materials. In: Yazdani, M. (Hrsg.): Multilingual-Multimedia.- Wiltshire Cromwell Press, 1993

• Definition of Multimedia
o Multi [Latin] → many; various
o Medium [Latin] → something used to convey or carry something
o Medium [American Heritage Electronic Dictionary, 1991] → a means and a way of distributing and presenting information
• Multimedia: the use of computers to present and combine text, graphics, audio, video and animation with links and tools that allow the user to navigate, interact, create and communicate
• Several definitions of multimedia:
o A combination of computer and video (Rosch, 1996)
o A combination of three elements: sound, images and text (McCormick, 1996)
o A combination of at least two input or output media; these media can be audio (sound, music), animation, video, text, graphics and images (Turban et al., 2002)
o A tool that can create dynamic and interactive presentations combining text, graphics, animation, audio and video (Robin and Linda, 2001)
o Multimedia in the computer context, according to Hofstetter (2001), is the use of computers to create and combine text, graphics, audio and video, using tools that allow the user to interact, create and communicate.
o Multimedia is the use of several different media to convey information (text, audio, graphics, animation, video, and interactivity). Multimedia also refers to computer data storage devices, especially those used to store multimedia content (wikipedia.org).
DEFINITION OF A MULTIMEDIA PC (MPC)
According to wikipedia.org:
• A multimedia computer is a computer configured according to the recommendations and equipped with a CD-ROM drive. The standardization of multimedia PCs was carried out by the "Multimedia PC Marketing Council", a working group of the organization formerly named the Software Publishers Association (now the Software and Information Industry Association). The council brought together Microsoft, Creative Labs, Dell, Gateway and Fujitsu.
Why a CD-ROM?
• Because in the early days multimedia meant little more than a computer's ability to play video from a CD-ROM.
Multimedia PC standards according to the Software and Information Industry Association:
In 1990 (Level 1):
– 16 MHz 386SX CPU
– 2MB RAM
– 30MB hard disk
– 256-color, 640 x 480 VGA video card
– 1x CD-ROM drive using no more than 40% of CPU to read, with < 1 second seek time
– Sound card outputting 22 kHz, 8-bit sound; and inputting 11 kHz, 8-bit sound
– Windows 3.0 with Multimedia Extensions.
In 1993 (Level 2):
– 25 MHz 486SX CPU
– 4 MB RAM
– 160 MB hard disk
– 16-bit color, 640×480 VGA video card
– 2X CD-ROM drive using no more than 40% of CPU to read at 1x, with < 400ms seek time
– Sound card outputting 44 kHz, 16-bit sound
– Windows 3.0 with Multimedia Extensions, or Windows 3.1
In 1996 (Level 3):
– 75 MHz Pentium CPU
– 8 MB RAM
– 540 MB hard disk
– Video system that can show 352×240 at 30 frames per second, 15-bit color
– MPEG-1 hardware or software video playback
– 4x CD-ROM drive using no more than 40% of CPU to read, with < 250ms seek time
– Sound card outputting 44 kHz, 16-bit sound
– Windows 3.11
In 200x ???
MULTIMEDIA COMPONENTS
• Ways of communicating information:
o Modalities: sight, hearing, touch
o Communication channels: speech, sound effects, music
o Medium: animation + sound, images + text
• The 4 main components of multimedia:
1. A computer, to coordinate what the user sees and hears
2. Links, which connect the user with the information
3. Navigational tools, which allow the user to explore the presented information
4. A way to share, process and communicate the user's own information and ideas
• If one of these components is missing → it is called mixed media, not multimedia
Why is multimedia important?
1. It acts as a trigger → the reader gains something "extra" beyond the topic being studied
2. It is very effective in conveying information; according to Computer Technology Research (CTR):
- People remember 20% of what they see
- People remember 30% of what they hear
- People remember 50% of what they hear, see and do
• Uses of multimedia:
1. Education → tutorials, encyclopedias (e.g. Microsoft Encarta), instructional material
2. Information → tourism, museums, art galleries
3. Entertainment → games, art, performances
4. Medicine → X-ray scanners
5. Advertising → television commercials, airport displays, kiosks, etc.
• Multimedia is able to:
1. Change the workplace. With teleworking, workers no longer have to do their jobs from the office. An example of software supporting teleworking/telecommuting: NetMeeting.
2. Change the way we shop. Home shopping/teleshopping can be done over the Internet, and the goods are then delivered.
3. Change the way business is done. Nokia built a mobile-phone business, many companies use online buying and selling, and banks offer online banking.
4. Change the way we obtain information. People use the Internet and various kinds of software to look for information, for example reading online newspapers such as detik.com, using health software, learning guitar from software, and much more.
5. Change the way we learn. Schools are starting to use multimedia computers, online learning and e-books.
6. Internet multimedia is also beginning to compete with television and radio.
• Strengths of multimedia:
1. It attracts attention → because human memory is limited
2. It is an alternative medium for delivering a message → reinforced with text, sound, images, video and animation
3. It improves the quality of information delivery
4. It is interactive
• Weaknesses of multimedia:
1. Poor design causes confusion and boredom → the message is not conveyed well
2. It can be a barrier for people with limited abilities / disabilities
3. It demands an adequate computer specification
Media (according to ISO93a) are classified by several criteria:
1. Perception Medium
• Perception media help humans perceive their environment
• "How do humans receive information in a computer environment?" → Information is perceived through sight or hearing
• Perceiving information by "seeing" differs from perceiving it by "hearing"
• Aspects of a perception medium:
1. Representation space: what the presentation physically appears in
- Paper, screen
- Slide show, PowerPoint
2. Representation values: the values contained in the presentation
- Self-contained (each person's interpretation differs), e.g. temperature, taste, smell
- Predefined symbol set (agreed upon in advance), e.g. text, speech, gestures
3. Representation dimension
- Space
- Time:
• time-independent, discrete (text, graphics)
• time-dependent, continuous media (video, audio, signals from different sensors)
2. Representation Medium
• Representation media are defined by how information is represented internally by the computer
• "How is information in the computer encoded?" → Various formats are used to represent information. Examples:
– Text: ASCII and EBCDIC
– Graphics: CEPT or CAPTAIN videotext
– Audio stream: PCM (Pulse Code Modulation) with 16-bit linear quantization (a short quantization sketch follows after this classification)
– Images: facsimile (ISO standard) or JPEG
– Audio/video: TV standards (PAL, SECAM, NTSC), computer standards (MPEG)
3. Presentation Medium
• The tools and devices used to input and output information
• "Through which medium is information presented by the computer, or entered into the computer?"
– Output: paper, screen, speakers
– Input: keyboard, mouse, camera, microphone
4. Storage Medium
• A data carrier able to store information (not limited to computer components)
• "Where is the information stored?" → microfilm, floppy disk, hard disk, CD-ROM, DVD, MMC, SD card
5. Transmission Medium
• An information carrier that enables continuous data transmission (storage media excluded)
• "Over what is the information transmitted?" → over networks, using cable (coaxial, fiber optics) or over the air (wireless)
6. Information Exchange Medium
• An information carrier for exchange; examples: storage media and transmission media
• "How is information exchanged between different places?" → direct transmission over a computer network, combined (storage and transmission media), web pages containing information, e-books, forums
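As a small illustration of the PCM encoding mentioned under the representation medium, the following sketch (a simplification; the sample values are illustrative assumptions) linearly quantizes floating-point samples in the range -1.0 to 1.0 into 16-bit integers, which is how uncompressed audio streams are commonly represented.

```python
# Sketch: linear 16-bit PCM quantization of audio samples.
# The sample values below are illustrative assumptions only.

def to_pcm16(samples):
    """Map floating-point samples in [-1.0, 1.0] to signed 16-bit integers."""
    pcm = []
    for s in samples:
        s = max(-1.0, min(1.0, s))          # clip to the valid range
        pcm.append(int(round(s * 32767)))   # 16-bit signed range: -32768..32767
    return pcm

analog_like = [0.0, 0.25, 0.5, 1.0, -0.5, -1.0]
print(to_pcm16(analog_like))   # [0, 8192, 16384, 32767, -16384, -32767]
```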
MULTIMEDIA SYSTEMS
A multimedia system is any system which supports more than a single kind of media [AHD 1991].
When can a system be called a multimedia system?
1. Combination of media
• A system is a multimedia system if both kinds of media (continuous and discrete) are used. Examples of discrete media are text and images; continuous media are audio and video.
2. Independence
• A key aspect of the different media types is their interdependence. A system is a multimedia system if the degree of dependence between those media is low.
3. Computer-supported integration
• The system must be able to perform computer-controlled processing and must be programmable by a system programmer or user.
Multimedia systems can be divided into:
1. Stand-alone multimedia systems
A multimedia computer system with at least storage (hard disk, CD-ROM/DVD-ROM/CD-RW/DVD-RW), input devices (keyboard, mouse, scanner, microphone), output devices (speakers, monitor, LCD projector), a VGA card and a sound card.
2. Network-based multimedia systems
Such a system must be connected through a network with high bandwidth. The difference is the sharing of the system and access to the same resources. Examples: video conferencing and video broadcast.
Problems: with low bandwidth, network congestion and delay occur, and the infrastructure may not be ready.
DATA STREAM
• In a distributed multimedia system, data is transmitted (time-dependent) and information is exchanged
• Digital system → information is divided into units (packets) → the packets are sent
• The packets are received → the packets are reassembled → the information is presented
Information transmission can be categorized:
1. By transmission mode
a. Asynchronous transmission mode
– Communication without time constraints; packets reach the receiver as fast as possible
– Packets are sent quickly because no synchronization is needed
– Information for discrete media can be transmitted as an asynchronous data stream
– Example: e-mail transmission
b. Synchronous transmission mode
– There is a maximum delay bound for each packet of a data stream
– Synchronization is required
– The receiver needs a buffer to hold data temporarily while waiting for complete packets
c. Isochronous transmission mode
– There are both maximum and minimum delay bounds
– Packet delivery is guaranteed
– The client informs the server about its status
– Requires a very large buffer
2. By streaming period (a short classification sketch follows after this list)
a. Strongly periodic stream
– The time interval between two consecutive packets is constant
– Example: PCM-coded audio
b. Weakly periodic stream
– The time interval between consecutive packets can be described by a periodic function
c. Aperiodic stream
– The time intervals are irregular
3. By packet size
a. Strongly regular stream
– The packet size is constant
– Example: uncompressed audio/video stream
b. Weakly regular stream
– The packet size changes periodically
– Example: MPEG
c. Irregular data stream
– The packet size is unpredictable
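As a rough, simplified sketch of the classification by streaming period (the interval values and the tolerance threshold are illustrative assumptions, not from the text), the following function looks at the inter-arrival intervals of packets and labels the stream as strongly periodic, weakly periodic, or aperiodic.

```python
# Sketch: classify a data stream by the intervals (in ms) between
# consecutive packets. The tolerance and example intervals are
# illustrative assumptions only.

def classify_by_period(intervals, tolerance_ms=1.0):
    """Return 'strongly periodic', 'weakly periodic', or 'aperiodic'."""
    if all(abs(i - intervals[0]) <= tolerance_ms for i in intervals):
        return "strongly periodic"      # constant inter-packet interval
    # Weakly periodic: the sequence of intervals repeats with some period.
    n = len(intervals)
    for period in range(2, n // 2 + 1):
        if all(abs(intervals[k] - intervals[k % period]) <= tolerance_ms
               for k in range(n)):
            return "weakly periodic"    # intervals follow a periodic function
    return "aperiodic"                  # no regularity found

print(classify_by_period([20, 20, 20, 20, 20]))        # strongly periodic (e.g. PCM)
print(classify_by_period([10, 30, 10, 30, 10, 30]))    # weakly periodic
print(classify_by_period([5, 42, 17, 3, 90]))          # aperiodic
```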
Multimedia distribution:
1. Offline:
- Installation / kiosk → on specific hardware
- CD-ROM / software download → on multiple hardware configurations
2. Online:
- Communication over a computer network
- A plug-in may be required
- Interaction and feedback are possible
Forms of multimedia presentation:
1. Card-based / page-based
- Designed to resemble the pages of a book or magazine, with text, image, video and sound elements
- There are links between pages → hypermedia
- Examples: HyperCard, ToolBook, WWW / HTML
2. Event-based
- The presentation is event-driven, e.g. pressing a button makes the system respond with actions
- Scripts and authoring tools are needed
3. Time-based
- The presentation runs according to a predefined schedule, like a slide show
- Characteristics: the presentation runs sequentially or in parallel, with synchronization (a small scheduling sketch follows below)
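As a minimal sketch of the time-based form (the slide contents and durations below are illustrative assumptions), a simple scheduler shows a sequence of "slides" for fixed durations, which is essentially what a slide-show style presentation does.

```python
# Sketch: a tiny time-based presentation, i.e. a slide show that advances
# on a fixed schedule. Slide texts and durations are illustrative only.
import time

slides = [
    ("Title: What is multimedia?", 2.0),   # (content, duration in seconds)
    ("Text + image: definitions", 3.0),
    ("Video clip placeholder", 4.0),
]

def play(presentation):
    for content, duration in presentation:
        print(f"Showing: {content} ({duration:.0f} s)")
        time.sleep(duration)               # hold the slide for its duration
    print("Presentation finished")

play(slides)
```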

Understanding Multimedia
Different users view the meaning of multimedia differently:
o PC vendor: a PC that has sound capability, a DVD-ROM drive and a superior multimedia microprocessor.
o Consumer: interactive cable TV providing hundreds of digital channels, or cable-TV-like services delivered over a high-speed Internet connection.
o Computer scientist (student): an application that uses a variety of media such as text, images, graphics, animation, video and sound (including speech), plus interactivity.

Other definitions of multimedia
o Electronics industry: a combination of computer and video (Rosch, 1996); or a combination of three elements, namely sound, images and text (McCormick, 1996); or a combination of at least two input or output media, which can be audio, animation, video, text, graphics and images (Turban et al., 2002); or a tool that can create dynamic and interactive presentations combining text, graphics, animation, audio and video images (Robin & Linda, 2001).
o Placed in context (Hofstetter, 2001): the use of computers to create and combine text, graphics, audio and moving images (video and animation), with links and tools that allow the user to navigate, interact, create and communicate.

There are four important components in the definition above:
1. There must be a computer that coordinates what is seen and heard and that interacts with us.
2. There must be links connecting us with the information.
3. There must be navigation tools that guide us through the interconnected web of information.
4. Multimedia provides us with a place to gather, process and communicate our own information and ideas. (M. Suyanto, 2003)

Multimedia and Computer Science
Graphics, visualization, computer vision, data compression, graph theory, networks and database systems.
Components of Multimedia
Multimedia involves various media such as text, sound, images, animation and video.
Examples of the use of these media include:

o Video teleconferencing
o Distributed lecturing for large-scale education
o Co-operative work environments
o Searching image and video databases for target visual objects
o Compositing computer-graphics objects and real video objects into scenes
o Placing sound at the locations of video-conference participants
o Building search features into video, supporting bitrates from highest to lowest, and scaling multimedia products
o Creating editable multimedia components
o Developing applications that can reconstruct the process by which a video was made
o Using voice recognition to build interactive environments.
Multimedia Research and Projects
Computer-science research on multimedia covers a variety of themes, including:
1. Multimedia processing and coding: multimedia content analysis, multimedia security, audio/image/video processing, compression, and so on.
2. Multimedia system and networking support: network protocols, the Internet, operating systems, servers and clients, quality of service (QoS) and databases.
3. Multimedia tools, end-systems and applications: hypermedia systems, user interfaces, authoring systems.
4. Multi-modal interaction and integration: web-everywhere devices, multimedia education, and design and applications of virtual environments.
Current Multimedia Projects

o Camera-based object tracking technology: tracking of controlled objects to give the user control over a process.
o 3D motion capture: used to capture an actor's movements visually in order to generate realistic animated models with natural motion automatically.
o Multiple views: photo-realistic (video-quality) synthesis of virtual actors from several cameras, or from a single camera under varying lighting.
o 3D capture technology: synthesis of highly realistic facial animation from speech.
o Specific multimedia applications: for example, a device intended to help people with low vision.
o Digital fashion: aims to develop smart clothing that can communicate with other such enhanced clothing using wireless communication, so as to artificially enhance human interaction in a social setting.
o Electronic Housecall system: provides a device that can monitor the health of patients at home.
o Augmented interaction applications: used to develop interfaces between real and virtual humans for tasks such as augmented storytelling.
Multimedia & Hypermedia
o A hypertext system: meant to be read nonlinearly, by following links that point to other parts of the document, or to other documents (see the small sketch at the end of this section).

o Hypermedia: not constrained to be text-based; it can include other media, e.g. graphics, images, and especially the continuous media, sound and video.
The World Wide Web (WWW) is the best example of a hypermedia application.
o Multimedia means that computer information can be represented through audio, graphics, images, video, and animation in addition to traditional media.

o Examples of typical present multimedia applications include:
− Digital video editing and production systems.
− Electronic newspapers/magazines.
− World Wide Web.
− On-line reference works: e.g. encyclopedias, games, etc.
− Home shopping.
− Interactive TV.
− Multimedia courseware.
− Video conferencing.
− Video-on-demand.
− Interactive movies.
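To make the nonlinear, link-following idea of hypertext concrete, here is a minimal sketch (the page names and link structure are invented for illustration): documents are nodes in a graph, and reading proceeds by choosing any outgoing link rather than following a fixed linear order.

```python
# Sketch: hypertext as a graph of documents connected by links.
# The page names and links below are invented for illustration.

pages = {
    "home":  {"text": "Welcome",             "links": ["audio", "video"]},
    "audio": {"text": "About sound media",   "links": ["home", "video"]},
    "video": {"text": "About moving images", "links": ["home"]},
}

def follow(start, choices):
    """Read nonlinearly by following one outgoing link after another."""
    current = start
    for pick in choices:
        print(f"{current}: {pages[current]['text']} -> links {pages[current]['links']}")
        if pick not in pages[current]["links"]:
            raise ValueError(f"No link from {current} to {pick}")
        current = pick
    print(f"{current}: {pages[current]['text']}")

follow("home", ["video", "home", "audio"])
```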
Multimedia Software Tools

The categories of software tools briefly examined here are:
1. Music Sequencing and Notation
a. Cakewalk: now called Pro Audio.
o The term sequencer comes from older devices that stored sequences of notes ("events" in MIDI).
o It is also possible to insert WAV files and Windows MCI commands (for animation and video) into music tracks (MCI is a ubiquitous component of the Windows API).
b. Cubase: another sequencing/editing program, with capabilities similar to those of Cakewalk. It includes some digital audio editing tools.
c. Macromedia SoundEdit: a mature program for creating audio for multimedia projects and the web that integrates well with other Macromedia products such as Flash and Director.
2. Digital Audio tools deal with accessing and editing the actual sampled sounds that make up audio:
a. Cool Edit: a very powerful and popular digital audio toolkit; emulates a professional audio studio, with multitrack production and sound-file editing, including digital signal processing effects.
b. Sound Forge: a sophisticated PC-based program for editing audio WAV files.
c. Pro Tools: a high-end integrated audio production and editing environment, with MIDI creation and manipulation and powerful audio mixing, recording, and editing software.
3. Graphics and Image Editing
a. Adobe Illustrator: a powerful publishing tool from Adobe. Uses vector graphics; graphics can be exported to the Web.
b. Adobe Photoshop: the standard graphics, image-processing and manipulation tool.
o Allows layers of images, graphics, and text that can be manipulated separately for maximum flexibility.
o The Filter Factory permits creation of sophisticated lighting-effects filters.
c. Macromedia Fireworks: software for making graphics specifically for the web.
d. Macromedia Freehand: a text and web graphics editing tool that supports many bitmap formats such as GIF, PNG, and JPEG.
4. Video Editing
a. Adobe Premiere: an intuitive, simple video editing tool for nonlinear editing, i.e., putting video clips into any order:
o Video and audio are arranged in "tracks".
o Provides a large number of video and audio tracks, superimpositions and virtual clips.
o A large library of built-in transitions, filters and motions for clips enables effective multimedia productions with little effort.
b. Adobe After Effects: a powerful video editing tool that enables users to add effects to and change existing movies: lighting, shadows, motion blurring, layers, and more.
c. Final Cut Pro: a video editing tool by Apple; Macintosh only.
5. Animation
a. Multimedia APIs:
o Java3D: API used by Java to construct and render 3D graphics, similar to the way in which the Java Media Framework is used for handling media files.
- Provides a basic set of object primitives (cube, splines, etc.) for building scenes.
- It is an abstraction layer built on top of OpenGL or DirectX (the user can select which).
o DirectX: Windows API that supports video, images, audio and 3-D animation.
o OpenGL: the highly portable, most popular 3-D API.
b. Rendering Tools:
o 3D Studio Max: rendering tool that includes a number of very high-end professional tools for character animation, game development, and visual effects production.
o Softimage XSI: a powerful modeling, animation, and rendering package used for animation and special effects in films and games.
o Maya: a competing product to Softimage; it is likewise a complete modeling package.
o RenderMan: rendering package created by Pixar.
c. GIF Animation Packages: a simpler approach to animation that allows very quick development of effective small animations for the web.

6. Multimedia Authoring
a. Macromedia Flash: allows users to create interactive movies by using the score metaphor, i.e., a timeline arranged in parallel event sequences (a small sketch of this metaphor follows below).
b. Macromedia Director: uses a movie metaphor to create interactive presentations; very powerful and includes a built-in scripting language, Lingo, that allows creation of complex interactive movies.
c. Authorware: a mature, well-supported authoring product based on the iconic/flow-control metaphor.
d. Quest: similar to Authorware in many ways; uses a type of flowcharting metaphor. However, the flowchart nodes can encapsulate information in a more abstract way (called frames) than simply subroutine levels.
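As a rough illustration of the score metaphor mentioned under Multimedia Authoring (the track names and events are invented for illustration, not taken from any specific tool), several parallel tracks of timed events can be merged into one timeline and executed in time order.

```python
# Sketch: the "score" metaphor -- several parallel tracks of timed events
# merged into a single timeline. Track names and events are invented.

score = {
    "graphics": [(0.0, "show background"), (2.0, "show title")],
    "audio":    [(0.5, "start music"),     (4.0, "fade music")],
    "video":    [(2.0, "play clip A")],
}

def flatten(tracks):
    """Merge parallel tracks into one list of (time, track, event)."""
    timeline = [(t, name, ev) for name, events in tracks.items()
                for t, ev in events]
    return sorted(timeline)             # ordered by time

for t, track, event in flatten(score):
    print(f"t={t:3.1f}s  [{track:8s}] {event}")
```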