Sunday, June 23, 2013

Nine Ways to Reduce Cognitive Load in Multimedia Learning

Mayer and Moreno start out by explaining how words and pictures are processed differently, but both processes take significant cognitive resources. Processing is broken down into two channels: visual and auditory. If presented with too much information at once, the learner can reach cognitive overload. Most lay people would see conflicts between these two channels as simple distraction; however, the authors explain why this distraction takes place.

The paper gives several examples of how cognitive overload can happen. It is a designer's responsibility to make sure that both channels are engaged equally; if too much information is funneled into one channel, it will lead to overload. One example is that the eyes cannot look at two different places at once. The paper describes a scenario where text and graphics were shown in two different places on the screen at the same time. This is distracting to the learner, because they do not know where to look. The suggested solution was to take part of the visual task away and make it an auditory task by narrating the text. The learner can then focus solely on the graphics while listening to the narration, thus reducing the load on the visual channel.

Most of these examples seem like common sense; however, this paper puts them in very scientific terms. Most people realize that listening to two different narrations at once is not feasible, but the reading explains why the brain simply cannot process this type of information.

Overall, this paper offers useful information for designers to consider when creating properly engaging instruction, pointing out that balance between the two channels is key.

What Makes e3 (effective, efficient, engaging) Instruction?

This is another writing that further illustrates the importance of the first principles. Merrill supports the idea that task-centered, e3 instruction promotes long-term memory retention. The more actively learners participate, the more likely they are to remember and apply the learned skills in the future.

The paper explains the difference between problem-based instruction and problem-centered instruction: problem-centered instruction has a structure that walks the learner through the problem-solving process in an engaging manner.


Most of the course types described in this paper are instructor-led online courses, rather than SDEL (self-directed e-learning) courses. It is important to make that distinction, because interaction is greatly affected. SDELs typically have little to no interaction, so motivation and engagement must be obtained in other ways. Instructor-led courses often provide access to things like forums, and promote peer interaction and instructor guidance.

Effective Web Instruction - Chapters 5 & 6

While this section brings up some valid points, most of the technologies discussed are out of date. Many no longer exist, while others have been replaced by more modern alternatives. Since this writing is 11 years old, it is to be expected that the technologies discussed are no longer relevant.

Even though some of the specifics discussed in this section are no longer relevant, there are some underlying principles that have not changed. A designer should always check the logistics of using the web as a medium for instruction. Is internet access available to the students? Will the chosen file formats be supported by the available browsers and technologies? Be sure to note any special software or hardware requirements. Also consider the limitations of certain computer languages and plan for your needs.

The “Course Delivery Tools” section seems to be talking about Learning Management Systems (LMSs). Oncourse is a good example of an LMS. While Boling and Frick point out that commercial systems may be expensive, there are now open-source solutions as well. Moodle is a very popular open-source (free) LMS for those on a tight budget.

Chapter 5 goes on to explain that once the web prototype is created, the same type of testing used on the paper prototype should be completed.

Chapter 6 covers bug testing, one last but important test that should be conducted prior to the official release. The final instruction should be tested on various browsers, platforms, and screen resolutions.

Anything the designers think their users might use to access their course should be considered in testing.  Sometimes suggestions are issued to the learners as to what browser, platform or device will work best with the particular course, but compatibility should be as accommodating as possible.

Since this writing is out of date, it brings up a very good, yet unintentional, point for my own instruction. If I deploy this as a usable course, I will need to keep up with changes in the portfolio sites I talk about and update the instruction periodically to make sure the information stays relevant. In addition, any links used need to be tested from time to time. Sometimes it is useful to have a reporting system, or at the very least an email link, for students to report undetected dead links. Anytime technology is the subject, the shelf life of the instruction is greatly shortened, and the material must be properly maintained.
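
As a side note, this kind of check does not have to be manual. Below is a minimal sketch of an automated link check (assuming a Node 18+ environment where fetch is available; the URLs listed and the checkLinks function are hypothetical placeholders, not part of any actual course):

    // Hypothetical list of links used in the instruction; replace with the real ones.
    const courseLinks: string[] = [
      "https://example.com/portfolio-platform-overview",
      "https://example.com/hosting-options",
    ];

    // Report any link that no longer responds with a successful status.
    async function checkLinks(urls: string[]): Promise<void> {
      for (const url of urls) {
        try {
          // A HEAD request is usually enough to confirm the page still exists.
          const response = await fetch(url, { method: "HEAD" });
          if (!response.ok) {
            console.log(`Broken link (HTTP ${response.status}): ${url}`);
          }
        } catch {
          // DNS failures, timeouts, etc. also count as dead links.
          console.log(`Unreachable: ${url}`);
        }
      }
    }

    checkLinks(courseLinks);

Running something like this on a schedule would catch most dead links before a learner does, with the email or reporting link as a backup for anything it misses.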

Effective Web Instruction - Chapters 3 & 4

This portion of the Effective Web Instruction handbook covers how to prepare for and conduct prototype testing. Boling and Frick point out that it is important to choose the right testers. These testers must not have mastery of the topic beforehand. This is necessary to determine whether mastery is reached after the instruction is complete. However, just because a person goes from non-mastery to mastery does not mean the instruction is successful. Some learners can read between the lines, while others cannot. Instruction should strive to have no holes in order to reach every type of student. Satisfaction and usability are also factors beyond mastery that determine successful instruction.
The chapters go on to explain the benefits of paper prototyping, then further review the paper prototyping process, Mager's objectives and instructional goals, as well as Merrill's 5 principles.
These sections bring up some important points to keep in mind when pilot testing. 

  1. Don’t forget to have the SME review the prototype, or if you are the SME, have a person knowledgeable on the topic review it to add an outside perspective.
  2. No matter who your tester is, look for non-verbal cues, and try not to answer questions about the instruction itself. However, it should be noted what questions are asked, and which ones the tester cannot resolve on their own by examining and exploring the course.

Although there is some very useful information presented in this handbook, I do not always agree with everything the authors say.

 “In our experience, conscientious testing generally leads to at least one major revision of a design – and even minor revisions can lead to scrapping electronic files that are easier to recreate than to revise.”

This statement may have been true at the time of this writing; however, over the past 11 years, modern design packages and coding techniques, along with experienced designers, have made changes much easier. Custom design of any kind always includes modifications and changes at many points in the process, and this should be expected and planned for. When set up properly, a designer will create a site or project in such a way that it will usually be easy to modify down the road. I am not against true wireframing (a non-interactive, static block layout of a design) to make sure the client and users understand and support a layout structure, but I am not convinced it needs to be done on paper for proper testing.

For me, I am much more comfortable blocking something out in Photoshop (no color, graphics, or detailed content; just placement, structure, and navigation) than on paper. But that is me. Other designers who do not focus on software use would probably be much more comfortable with paper prototyping. Essentially the process is the same (testing at an early stage), just in a different medium. However, no matter the medium, a detailed user experience test is a necessary step toward successful instruction. It will give the designer great insight into how users think and the process they use when navigating and completing the instruction.

Because my activities and tasks are web-based (creating an online portfolio), I found it hard to create a detailed paper prototype and test like the one described in this reading. I was mainly testing structure and navigation to make sure the sections and layout made sense to the learner, and in that respect, things went well. However, for more detailed instruction, I can see how more detailed prototypes and tests would be necessary.


One particularly useful point I found was how to gauge student satisfaction. This section describes what questions should be asked in order to determine how satisfied the student was with the instruction. These would be great for surveys.

Wednesday, June 12, 2013

Making a paper prototype

Paper prototyping is a totally new concept for me. I have done static wireframing, mockups, and usability testing, but nothing quite like this. The reading gave many ideas of how to do this type of prototyping, but not a lot of reasoning as to why; perhaps this is in another part of the book. I am sure there are situations where this type of prototyping and testing are the norm, but it is very new to me. Nevertheless, I am sure this will prove to be an interesting experience.

The author goes into great detail about different materials and techniques used to simulate navigation and operation of an online course. Some of the pictures show very detailed prewritten content of what is on the site. However, the author does mention towards the end of the article that detailed content is not always necessary at this stage. If there are elements that have not been worked out, then skip them. I feel the main point of this exercise is to test the navigational flow and organization of the site itself. If the categories and processes make sense, then content details can be added later. The key is to make sure the very foundation and structure of the site is solid and not confusing to the user.

The author makes a good point about making sure the intended medium is tested. If you expect that your users will be using mobile devices to access the course, then this should be planned for in the prototyping phase. However, it is often unknown what type of device a user will use. In this case, I feel planning the prototype for a desktop computer would be best. Then, in the course construction phase, a responsive template can be used to accommodate smaller devices. In responsive design, there is the option to leave out or add elements in the mobile versions versus their desktop counterparts. While I do not believe separate prototypes for various screen/device sizes are necessary, if there are elements that may differ between the full-sized and smaller versions, this should be brought up during the user testing/prototype stage.
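
To make the idea of leaving out elements at smaller sizes concrete, here is a minimal sketch of my own (assuming a hypothetical page element with the id "sidebar"; in real responsive templates this is normally handled with CSS media queries rather than script):

    // Hide a hypothetical sidebar on narrow screens, show it on wider ones.
    const mobileQuery = window.matchMedia("(max-width: 600px)");

    function toggleSidebar(isMobile: boolean): void {
      const sidebar = document.getElementById("sidebar"); // assumed element id
      if (sidebar) {
        sidebar.style.display = isMobile ? "none" : "block";
      }
    }

    // Apply once on load, then again whenever the viewport crosses the breakpoint.
    toggleSidebar(mobileQuery.matches);
    mobileQuery.addEventListener("change", (event) => toggleSidebar(event.matches));

The point for prototyping is simply that if an element will disappear on mobile, the testers should know that up front.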

Changes in Student Motivation During...

Student motivation is key in “self-directed e-learning (SDEL)” environments. Self-directed, in many cases, means that students can direct themselves to turn off or leave the instruction if they no longer feel it is useful. The authors suggest using a combination of the 5 principles, ARCS, and ALT to fully motivate and engage learners. They suggest using real-world tasks with increasing difficulty in practice. Although these methods should ideally be used in most course types, they are especially important in an SDEL environment, because without direct instructor contact, these strategies are how the student will be reached and made to feel like their learning is important.

One of the most important things I felt the authors mentioned was making sure there are no broken links in the instruction. This can be hard to keep track of when using outside resources, but it is imperative. There is nothing more frustrating to a user than to be in the middle of learning, click a link they are directed to, and find out it is no longer working. If a course instructor or developer has not taken the time to make sure their instruction works properly, why should the learner care to continue? They now feel their time is wasted and will most likely abandon the instruction.

I agree with the conclusion where it states that most people who choose SDEL do so for convenience and time purposes, and go into the course self-motivated. However, keeping them there is where the real art of instruction comes into play.

Effective Web Instruction: Handbook for...

In the first two chapters of this handbook, Frick and Boling discuss the elements of planning web-based instruction. They also say that most instruction of this type lacks the user testing, changes, and revisions that usually go into instructional design. The main point of the first chapter is to make sure that material is not just created and released. It must be planned, tested, revised, and retested to make sure all marks are being hit for all stakeholders involved. The second chapter goes on to talk about instructional goals and indicators, learner analysis, and content analysis. All of these elements are imperative when designing a successful online course.

Web instruction comes in many forms, much of which is informal. YouTube and Vimeo videos, PDF tutorials, topical websites, and a myriad of other resources to help people learn things are available to the general public. However, the type of instruction this handbook refers to is a formal self-directed course, which the authors believe should fit a specific format and contain all 5 of Merrill's first principles. Most web instruction does not follow the format laid out in this handbook. This is likely because the majority of people who write online courses are not instructional designers. In many cases, they may be subject matter experts (SMEs) who use digital tools to transfer their knowledge to others in the form of web instruction. While they may be experts on the content, they may have no experience in how to plan or write successful instruction. This may work for informal learning resources, where learners are simply trying to fill in gaps in their existing knowledge. However, if, for instance, a company is trying to implement a self-directed program for their employees to learn complex job-related processes and procedures, then more planning and consideration should go into this type of learning.


I think these chapters bring up some great points that are often overlooked. Many might be surprised how many people and companies consider reading material with a multiple-choice quiz at the end to be proper online instruction; however, Frick and Boling point out that it can, and should, be so much more.

Monday, June 10, 2013

5 Star Instructional Design Rating

This particular rating system is used to determine how effective instruction is when compared to Merrill's 5 Principles. There are 5 main topics with 3 subtopics each. Each of the main topics is worth 1 star, while the subtopics determine the type of star (Gold, Silver, or Bronze). This rating system does not work on lectures, nor does it work for information with a simple pop quiz at the end. The whole point of the rating is for instruction to correctly incorporate the 5 principles. Therefore, the more interactive and task-oriented the instruction is, the higher it will score.


I chose to evaluate the “Learning Spanish” example listed in the course schedule. As with any rating system, I felt it is somewhat subjective, because most instruction will not follow Merrill's format exactly. Therefore some subtopics may be only partially covered, making the lines fuzzy as to which type of star to award that topic. However, the following is how I interpreted this rating system in regards to the example instruction.

5 star rating of “Learning Spanish”

Preface: Let me start by saying that much of this instruction is incomplete. Many of the modules lead to broken links or unfinished information. Other modules simply lead to a list of resource sites. The only module which can truly be evaluated, in my opinion, is the “Vocabulary” module, because this is the only one which seems to include both information and practice. Therefore this rating will be based on that module alone.

1. Is the courseware presented in the context of real world problems? - BRONZE
  1. Does the courseware show learners the task they will be able to do or the problem they will be able to solve as a result of completing a module or course? No. Simply showing a list of words and translations doesn't support real-world problem solving.
  2. Are students engaged at the problem or task level not just the operation or action levels? No.  There are simply word lists to study, and then games to play to practice memorization.  I would consider this more action than task or problem solving.
  3. Does the courseware involve a progression of problems rather than a single problem? Yes. This one is a bit of a judgment call. If the practice games were, in this instance, considered problem solving, they do seem to get harder as the learner progresses.

2. Does the courseware attempt to activate relevant prior knowledge or experience? - SILVER
  1. Does the courseware direct learners to recall, relate, describe, or apply knowledge from relevant past experience that can be used as a foundation for new knowledge? Yes. This is another fuzzy line; the courseware does not directly address prior knowledge, but the student has the option to skip words in the study list that they already know, and any prior knowledge of the listed words would help in the practice sections.
  2. Does the courseware provide relevant experience that can be used as a foundation for the new knowledge? No.  I don’t believe that the study list counts as relevant experience. 
  3. If learners already know some of the content, are they given an opportunity to demonstrate their previously acquired knowledge or skill? Yes. They will ultimately be better at the practice games.

3. Does the courseware demonstrate (show examples) of what is to be learned rather than merely tell information about what is to be learned? - SILVER
  1. Are the demonstrations (examples) consistent with the content being taught? No.  Only examples are provided.  There are no non-examples or any other qualifying criteria supplied.
  2. Are at least some of the following learner guidance techniques employed? Yes.  Learners are directed to relevant information in the vocabulary lists. 
  3. Is media relevant to the content and used to enhance learning? Yes.  This is hard to tell without objectives, but if the objective is for the learner to memorize the translation of certain words, then yes it is relevant, and does somewhat enhance learning.


4. Do learners have an opportunity to practice and apply their newly acquired knowledge or skill? - SILVER
  1. Are the application (practice) and the posttest consistent with the stated or implied objectives? Yes. The practice requires learners to recall or recognize information.
  2. Does the courseware require learners to use new knowledge or skill to solve a varied sequence of problems, and do learners receive corrective feedback on their performance? Yes. There are different kinds of games for the word lists that, in most cases, provide corrective feedback.
  3. In most application or practice activities, are learners able to access context sensitive help or guidance when having difficulty with the instructional materials? Is this coaching gradually diminished as the instruction progresses? No. Students only have access to the listed guides in the vocabulary section, but nothing within the game.
5. Does the courseware provide techniques that encourage learners to integrate (transfer) the new knowledge or skill into their everyday life? - NONE
  1. Does the courseware provide an opportunity for learners to publicly demonstrate their new knowledge or skill? No.
  2. Does the courseware provide an opportunity for learners to reflect-on, discuss, and defend their new knowledge or skill? No. These types of games do not allow for discussion or reflection.
  3. Does the courseware provide an opportunity for learners to create, invent, or explore new and personal ways to use their new knowledge or skill? No.

Overall rating: 4 stars, including 3 silver and 1 bronze
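
For my own clarity, here is a small sketch of how I tallied the stars above. The mapping from subtopics to star type is my interpretation of the rubric: one satisfied subtopic earns bronze, two earn silver, three earn gold, and none earns no star.

    // Each principle has three yes/no subtopic judgments.
    type Principle = { name: string; subtopics: boolean[] };

    // My judgments for the "Learning Spanish" vocabulary module.
    const principles: Principle[] = [
      { name: "Real-world problems", subtopics: [false, false, true] },
      { name: "Activation",          subtopics: [true, false, true] },
      { name: "Demonstration",       subtopics: [false, true, true] },
      { name: "Application",         subtopics: [true, true, false] },
      { name: "Integration",         subtopics: [false, false, false] },
    ];

    // Map the number of satisfied subtopics to a star type.
    function starFor(count: number): string {
      return ["none", "bronze", "silver", "gold"][count];
    }

    for (const p of principles) {
      const count = p.subtopics.filter(Boolean).length;
      console.log(`${p.name}: ${starFor(count)}`);
    }

Tallying it this way reproduces the ratings above: one bronze, three silver, and nothing for integration, for a total of 4 stars.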

I feel this rating system is lacking in many areas. Although this particular instruction does (sometimes barely) meet the qualifications for the subtopics, I do not think it is good, real-world instruction. There are no real objectives, no real-world practice, and no assessment. So while the Merrill system might rate it as mediocre, a rating system by someone like Mager would not be so kind.



Mager’s Tips on Instructional Objectives

Learning objectives are the whole basis around which instruction is written. When an objective is written properly, it is easier to align it with the rest of the instruction, such as the explanation of material, practice, and assessment.

Proper objectives are broken down in 4 parts:
    • Audience - Who will perform the objective?
    • Behavior - What will the audience be doing?
    • Condition - What are the circumstances under which the behavior will take place?
    • Degree - To what degree (how many times, how completely, etc.) should the behavior be completed to be deemed successful?

Although all four are imperative to creating a complete objective, usually the most emphasized area is behavior. This describes the main intent and the action expected. Sometimes these are the same thing, and sometimes they are not. Mager explains that if a main intent is observable (overt), then it will also serve as the action or indicator. However, if the main intent is not observable (covert), then an indicator action should be added to clarify that the main intent has been achieved.

For the most part, I felt this was very to the point. I was able to come up with relatively solid objectives after reading it. However, writing objectives Mager's way takes practice. Everything should be explained as concisely as possible, with as little wiggle room as possible. This is not always as easy as it sounds, and it sometimes takes an outside eye to point out areas that can be tightened up.

The only major complaint I have with this reading is that the “Pitfalls” section seemed to be full of non-examples and could have used more correct examples. One of my pet peeves is being told what not to do without being given a good example of what to do. I realize this is not feasible in every case, but I think more effort could have been placed here.

Overall, while obviously not as detailed as the Measuring Instructional Results book, this tips summary is a handy reference for getting started with writing proper objectives. And above all, as in anything else, practice makes perfect!



Sunday, May 12, 2013

Week 1 - Prescriptive Principles of Instructional Design


Merrill’s first principles of instruction are five basic concepts that can be found in almost every instructional model, although not always all at once. These principles are listed as:
  • “Task-centered approach”
  • “Activation principle”
  • “Demonstration principle”
  • “Application principle”
  • “Integration principle”

Merrill concludes that each of these principles must be included in instruction in order for learning to be successful. The article goes on to compare these principles to existing frameworks, and also to explain how they apply to e-learning and media-based instruction.

In essence, these principles use repetition of a task or concept by aligning the objectives with a pre-assessment, a ‘tell, show, do’ outline, and then helping the learner apply the new skills to a real-world situation. Repeating the content in these different ways cements the objectives and makes them more relevant and memorable to the learner.

If applied successfully, these principles should work in face-to-face instruction as well as in media-based instruction or e-learning. Personally, I feel the biggest difference to keep in mind when designing self-directed e-learning versus face-to-face instruction is that learners will not have the opportunity to ask questions. The instruction has to be very clear and self-sufficient. The “Principles for Multimedia Learning” and “Principles of e-Learning” seem to do a good job of taking Merrill's First Principles and applying them to the self-learner. However, according to Table 14.1, it appears that Clark & Mayer's e-Learning Principles do not include an “activation” aspect; this is where a pre-assessment would come into play. In Table 14.2, Allen's e-learning principles seem to include all 5 of Merrill's principles.

It must be remembered that no model is perfect in every situation, but Merrill's First Principles seem to serve as a constant reference for what a model should include.