E-learning Curve Blog at Edublogs

E-learning Curve Blog is Michael Hanley's e-learning blog about skills, knowledge, and organizational development using web-based training and technology in education.

Entries Tagged as 'learning strategy'

Learning Professionals’ Skills 2.0 – Learning Circuits Big Question July 2009

July 2, 2009 by Michael Hanley · Comments Off · e-learning, Learning and Performance Architecture, Learning Circuits Blog Big Question, learning strategy, learning technology, web 2.0

This month’s Learning Circuits Blog Big Question is:

In a Learning 2.0 world, where learning and performance solutions take on a wider variety of forms and where churn happens at a much more rapid pace, what new skills and knowledge are required for learning professionals?

As Harold Jarche and Jay Cross have already addressed the “learning” part of the discussion with informative and illuminating posts on the topic, I’m going to talk about the business aspect of the “performance” element highlighted in The Big Question.

Now read on…

I strongly believe that to survive and maybe even prosper in these leaner economic times, those of us involved in L&D need to understand that we are also business people. As in any enterprise, we are connected to our customers and clients through a variety of sophisticated and interconnecting partnerships: with organizations, with vendors, with the board of directors, with employees, and ultimately and most importantly with learners.

Our product is our special expertise in learning and development (and all that this entails), and our market is more competitive now than it has ever been. I would assert that if, as a trainer, you feel that you are somehow shielded from the realities of business in the early 21st century, you probably won’t have much of a career in five years’ time.

Enterprises need e-learning. The pace of organizational change in most companies requires a constant refreshing of skills and the continual development of new competencies. In many organizations, not choosing e-learning as a method to deliver key training initiatives usually means the training will not be delivered at all. To remain competitive, enterprises need to:

  • Provide continual, up-to-date training and professional development
  • Distribute that training to knowledge workers across multiple delivery channels
  • Implement scalable training solutions
  • Develop and deploy training plans within months, rather than quarters or years
  • Demonstrate economic viability

Learning professionals should heed their organizations’ strategic and business imperatives, align with them, and deliver appropriate solutions to support them. To make this happen, my view is that learning professionals need to have (or should develop) the skills and expertise to perform in the following domains:

  • Communicator: Champions effective approaches to learning
  • Consultant: Oversees governance and alignment of business and learning strategy
  • Learning Innovator: Implements the best learning solutions based upon appropriate theories, pedagogies, and technologies
  • Learning Technologist: Collaborates with ICT on the most appropriate use of technologies for learning
  • Human Capital Management Strategist: Supports enterprise performance enhancement
  • Business-Savvy Educator: Consults with lines of business on learning needs
  • Learning & Knowledge Manager: Develops and maintains the organizational knowledge base and training resources
  • Organizational Change Agent: Builds a learning culture in the enterprise

Sadly, none of this is sexy, but it’s what I believe you need to do to be successful in this domain.

In meetings in my organization I have been known to say that being a learning & development professional is a bit like running a trucking company. It’s my job to get stuff to the people who need it, and to be honest my customers don’t really care how it gets there, as long as it arrives on time and in good shape. To extend the analogy, I could argue that Web 1.0 e-learning was like a sports car – it looked great and made a big impact wherever it arrived, but it was quite impractical, required a lot of TLC and maintenance, and while it may have been high-performing on the (one-way) racetrack of the information superhighway, try maneuvering it around the multi-storey car park of most organizations’ networks.

Web 2.0 is without equal at delivering vast amounts of information. It is an accessible, multiplex environment, so data can move back, forth, left, right – wherever it needs to go. Learning 2.0 leverages this facility exceptionally well, because communication of knowledge, skills, and expertise is at the heart of training and learning.

Learning professionals who have supplemented their educational expertise with broader business skills have positioned themselves to add value to their enterprise by facilitating their organizations’ performance requirements and their customers’ learning needs. And that is a win-win situation.



3PD Approaches to Evaluation: Discovering Instructional Design 16

June 19, 2009 by Michael Hanley · 1 Comment · ADDIE, approaches to learning, Cognitivism, e-learning, e-learning development, evaluate learning, instructional design, ISD, learning strategy

We’re approaching the 40th anniversary of the first moon landing. I’ve no doubt that there will be a bombardment of documentaries, retrospectives, and "why aren’t we there now?" features coming this July, surrounding the big day itself. This will brighten up my summer no end. Despite its Cold War beginnings, I happen to think that the Apollo-era US Manned Space Program represents the epitome of human vision and endeavor.

What has this got to do with instructional design, say you?

Well, read on…

NASA wouldn’t have got to the Moon, or even to the next town, without gimbals. NASA uses gimbals not only for orienting rocket engines, but also in navigational systems and instrument panels. Without gimbals, it would have been very difficult for NASA to find a way to send astronauts safely into space.

A gimbal is a mechanism that helps to keep an object on target: it’s built into the platform’s systems to correct deviations from a pre-determined goal.

On the Saturn V rocket, for example, gimbals were used to set the rocket at the correct pitch and yaw angles to safely "clear the tower" – that is, not bump into the rocket’s support gantry on lift-off. Later in the flight, gimbals pitched the rocket’s trajectory to align with the Earth’s curve for its journey into orbit (rockets don’t go "straight up" but rather ascend in an arc until they attain the required altitude).

So what, space nerd? What has this to do with instructional design, say you again, losing patience?

In my view, the task gimbals* perform in space flight is similar to the role evaluation performs in instructional design.

According to Donald Clark (2009):

Evaluation is the systematic determination of merit, worth, and significance of a learning or training process by using criteria against a set of standards. The evaluation phase is ongoing throughout the ISD process. The primary purpose is to ensure that the stated goals of the learning process will actually meet a required business need. Thus, it is performed during the first four phases of the ISD process.

Indeed, we can see that this strategy is codified in Dick and Carey’s approach (see Figure 1), where an ongoing review process is indicated during the first six phases of the process.

Figure 1. Dick and Carey’s Model

Formal evaluations proper are undertaken in steps 7-9 of their model:

   1. Determine the instructional goal
   2. Analyze the instructional goal
   3. Analyze the learners and contexts
   4. Write performance objectives
   5. Develop assessment instruments
   6. Develop instructional strategy
   7. Design and conduct formative evaluation
   8. Revise instruction
   9. Undertake summative evaluation
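
Steps 7-9 form a loop: formatively evaluate, revise, and repeat until the instruction is ready for a summative judgment. Here is a minimal sketch of that loop in Python; the function names and data shapes are my own illustrative assumptions, not anything Dick and Carey prescribe:

```python
def formative_evaluation(instruction):
    """Hypothetical stand-in for step 7: returns identified weaknesses."""
    return list(instruction.get("known_issues", []))

def revise(instruction, issues):
    """Hypothetical stand-in for step 8: returns instruction with issues addressed."""
    remaining = [i for i in instruction["known_issues"] if i not in issues]
    return {**instruction, "known_issues": remaining}

def summative_evaluation(instruction):
    """Hypothetical stand-in for step 9: final judgment of overall worth."""
    return len(instruction.get("known_issues", [])) == 0

def evaluate_and_revise(instruction, max_rounds=3):
    # Steps 7-8: iterate formative evaluation and revision.
    for _ in range(max_rounds):
        issues = formative_evaluation(instruction)
        if not issues:
            break
        instruction = revise(instruction, issues)
    # Step 9: summative evaluation of the revised instruction.
    return instruction, summative_evaluation(instruction)

course = {"title": "Gimbals 101", "known_issues": ["unclear objectives"]}
course, passed = evaluate_and_revise(course)
print(passed)  # True once identified weaknesses have been revised away
```

In practice, the formative step would draw on the one-to-one, small-group, and field evaluations discussed below.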

Dick and Carey (2001) recommend three categories of formative evaluation to support this process: one-to-one (or clinical) evaluation, small-group evaluation, and field evaluation. In my view, however, they don’t suggest a mechanism for evaluation per se, as the activities they suggest are standard ethnographic research methodologies. Similarly, while they consider ongoing reviews to be a component of their ID model, the research suggests that practice rarely reflects this. In her 1989 article Evaluation of training and development programs: A review of the literature, Marguerite Foxon describes herself as "surprised" at the "general" and "superficial" nature of the research undertaken on evaluation, and considered that what was there was "difficult to understand and apply."

She continues:

Where evaluation of programs is being undertaken it is often a ‘seat of the pants’ approach and very limited in its scope. …trainers often revert to checking in the only way they know – post-course reactions – to reassure themselves the training is satisfactory.

If the literature is a reflection of general practice, it can be assumed that many practitioners do not understand what the term evaluation encompasses, what its essential features are, and what purpose it should serve. …Many practitioners regard the development and delivery of training courses as their primary concern, and evaluation something of an afterthought.

She suggests that many practitioners prefer to "remain in the dark," concerned that any actual evaluation will "confirm their [the instructional designers'] worst fears" about the educational quality of the courseware they deliver, with the result that they "choose to settle for a non-threatening survey" of Kirkpatrick Level 1-style trainee reactions.

As we have seen in our look at Three-Phase Design (3PD), in this model evaluation is not viewed as a post-delivery activity (Sims, 2008 p.5): the nature of Web-based education is such that changes can be made immediately (that is, during Phase 2 – Evaluate, Enhance, Elaborate), as long as those changes don’t affect the integrity of the learning program’s objectives. The second phase can be

"conceptualised to take place during course delivery, with feedback from both teachers and learners being used to modify and/or enhance delivery.

(p5)

Sims and Jones (2003) call this process "proactive evaluation" (see Figure 2).

Figure 2. Proactive evaluation in 3PD

Using this approach, formative "feedbacks" occur between instructor and students during course implementation. The authors assert that this mechanism continues the dynamic collaboration between the members of the development team. The second phase enables

generational changes in the course structure, with emphasis on the production (completion) of resources, and where learners can take a role of research and evaluation assistants. By developing and building effective communication paths between each of these three roles, a shared understanding of the course goals and learning outcomes can be established, thereby minimising any compromise in educational quality and effectiveness.

In my view (as shown in Figure 3), evaluation in this model is founded upon recursion. The enhancement process is undertaken by the actors (instructors, designers, and learners) using a strategy similar to the concept of optimal (or dynamic) programming, where complex problems are solved by breaking them down into simpler sub-problems.

Figure 3. Recursive evaluation in the 3PD Model

In essence, the enhancement process is repeated until the learning program is considered complete.
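
A minimal sketch of this recursive strategy in Python, assuming a course is a tree of units with a hypothetical quality score; the threshold and scoring are illustrative, not drawn from Sims’ papers:

```python
def evaluate(unit):
    """Hypothetical feedback score in [0, 1] from instructors and learners."""
    return unit["quality"]

def enhance(unit):
    """Hypothetical enhancement step: acting on feedback raises quality."""
    unit["quality"] = min(1.0, unit["quality"] + 0.2)

def enhance_until_complete(unit, threshold=0.8):
    # Recurse first: solve the simpler sub-problems (the child units).
    for child in unit.get("children", []):
        enhance_until_complete(child, threshold)
    # Then repeat the evaluate-enhance cycle on the unit itself.
    while evaluate(unit) < threshold:
        enhance(unit)

course = {"quality": 0.5, "children": [{"quality": 0.4, "children": []}]}
enhance_until_complete(course)
print(course["quality"] >= 0.8)  # True: the program is considered complete
```

Each child unit is a simpler sub-problem; the parent is only evaluated once its children are complete, which mirrors the recursion shown in Figure 3.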

Even during the Maintenance Phase, the ongoing process of

gathering and incorporating evaluation data caters for the sustainability of the course.

(Sims, 2008 p.6)

Unlike the Dick and Carey and Kemp models, 3PD supports overlapping roles, skills, and responsibilities. These contributions may well change through the lifecycle of a learning program, as the model promotes and supports the development of instructors’ and students’ knowledge, skills, and experience via the virtuous circle of ongoing collaboration and communication between the actors, and the development of working relationships. The inclusion of learners in the content development process differentiates 3PD from the other models discussed here.


*(Note to hardcore design-heads: this is a metaphor†: I’m not suggesting they’re literally equivalent. Go with it).

†Metaphor (n) – a figure of speech in which a word or phrase literally denoting one kind of object or idea is used in place of another to suggest a likeness or analogy between them (Merriam-Webster Online Dictionary)

___________

References:

Clark, D. (2009). Evaluation in Instructional Design. [Internet] Available from: http://www.nwlink.com/~donclark/hrd/sat6.html Accessed 12 June 2009

Foxon, M. (1989). Evaluation of training and development programs: A review of the literature. Australian Journal of Educational Technology, 5(2), 89-104. [Internet] Available from: http://www.ascilite.org.au/ajet/ajet5/foxon.html Accessed 12 June 2009

Sims, R., & Jones, D. (2003). Where practice informs theory: Reshaping instructional design for academic communities of practice in online teaching and learning. Information Technology, Education and Society, 4(1), 3-20.

Sims, R. (2008). From three-phase to proactive learning design: Creating effective online teaching and learning environments, In: J. Willis (Ed), Constructivist Instructional Design (C-ID): Foundations, Models, and Practical Examples.



Phases of the 3PD Approach: Discovering Instructional Design 15

June 16, 2009 by Michael Hanley · Comments Off · ADDIE, collaboration tools, constructivist learning, e-learning, elearning research and development, instructional design, ISD, Kirkpatrick, learning channel, learning outcomes, learning strategy, learning styles, modes of learning, online learning

The intent of the Three-Phase Development (3PD) Model was to provide a new focus for the end-to-end learning content and evaluation development process, especially for Web-based teaching and learning. As discussed yesterday, a central tenet of 3PD was that course creation could not be viewed as a short-term development process, but rather as a long-term collaborative process which would

generate and evolve into focused communities of practice with shared understanding and a philosophy of continuous improvement

(Sims & Jones, 2003 , p. 18) 

Three-Phase Design is configured to elicit learning content through a three-phase process of developing functionality; evaluating, elaborating, and enhancing; and maintaining materials, rather than the more traditional systems approach of analyze, design, develop, implement, evaluate. The approach also aims to align the "three essential competency sets" for courseware development (course design, subject matter exposition, and content production) in an integrated fashion rather than as a set of uncoordinated activities.

Rather than process driving development, it is the context of the educational components which determine the members of development teams in a targeted and effective manner. Ideally, these teams would remain for the duration of the project, potentially over a number of semesters.

(Sims, 2008 p.3)

To achieve this goal, 3PD specifies a series of "baselines" (2008 p.4) that align with implementation iterations: the first focusing on building functional and essential course components, the second on enhancement and interactivity, and the third on ongoing maintenance of the courseware (see Figure 1). These three phases of development integrate systems-based methodological approaches to content development, scaffolding of contributors, and quality assurance.

Figure 1: Three-Phase Design & Scaffolding (after Sims & Jones, 2003)

According to Sims and Jones, Phase 1 is a pre-delivery mode, which involves gathering and preparing Web-based teaching resources, selecting the learning channel, specifying assessment-based outcomes and the preferred teaching modality, and designing learning/learner activities to attain the prescribed outcomes. Three-Phase Design enables a teacher with minimal experience of Web-based training and learning environments to access "functional learning structures" (Sims, 2008 p.4) and in-team expertise from the Developers and the Educational Designers in the group.

Phase 2 (Enhancement) is the delivery stage in 3PD. The asynchronicity of digital, network-supported learning and the object-oriented nature of e-learning mean that modifications can be implemented in courseware on an ongoing basis (for example, to take account of new learning materials or new knowledge) to enhance the student’s ability to achieve the learning objectives. In this way the second phase takes place during course delivery, with Kirkpatrick Level 1- and Level 2-style feedback from both instructors and learners being used to modify and/or enhance delivery, either continuously or in a staged manner. For example, modifications may be implemented before the beginning of each new semester, based upon the reactions of learners who took the course during the previous semester.

The third stage of 3PD – the maintenance phase – occurs during the "main sequence" (to borrow a term from astronomy) of the course lifecycle. In time, a course will attain a stable state where the teaching strategies and learning activities are working effectively, its materials are up-to-date, and the course is taken by a sufficient number of learners to make delivery and maintenance cost-effective for the host institution.

Sims (2008) considers that:

The implications of applying the 3PD model is that the original functional system will always be subject to change, and that development environments need to schedule resources for the life-time of that course. The continual process of gathering and incorporating evaluation data caters for the sustainability of the course.

(p.6)

Phase 3 provides an opportunity for a rigorous quality assurance process to be undertaken, and for stakeholders in the course development project to consolidate the instructional design and collaborative skills acquired during the 3PD process: ideally these skills are then applied to the development of a new learning program, where they continue to be refined, with remediation taking place as necessary.
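
To pull the three phases together, here is a minimal sketch of a 3PD course lifecycle in Python. The phase names follow Sims and Jones; the function signatures and data shapes are my own illustrative assumptions:

```python
def phase1_develop_functionality(title, outcomes):
    # Pre-delivery: assemble the essential, functional course components.
    return {"title": title, "outcomes": outcomes, "materials": [], "semester": 0}

def phase2_evaluate_enhance_elaborate(course, feedback):
    # During delivery: act on instructor and learner feedback without
    # altering the course's stated learning outcomes.
    course["materials"].extend(feedback)
    course["semester"] += 1

def phase3_maintain(course, updates):
    # The stable "main sequence": keep materials current and cost-effective.
    course["materials"].extend(updates)

course = phase1_develop_functionality("Web-based unit", ["outcome 1"])
phase2_evaluate_enhance_elaborate(course, ["worked example", "clearer rubric"])
phase3_maintain(course, ["updated reading list"])
print(course["semester"], len(course["materials"]))  # 1 3
```
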
___________

References:

Sims, R. (2006). Beyond instructional design: Making learning design a reality. Journal of Learning Design, 1(2), 1-7. [Internet] Available from: http://www.jld.qut.edu.au/ Accessed 3 June 2009

Sims, R., & Jones, D. (2002). Continuous improvement through shared understanding: Reconceptualising instructional design for online learning. Proceedings of the 2002 ascilite conference: Winds of change in the sea of learning: Charting the course of digital education. [Internet] Available from: http://www.ascilite.org.au/conferences/auckland02/proceedings/papers/162.pdf Accessed 3 June 2009

Sims, R., & Jones, D. (2003). Where practice informs theory: Reshaping instructional design for academic communities of practice in online teaching and learning. Information Technology, Education and Society, 4(1), 3-20.

Sims, R. (2008). From three-phase to proactive learning design: Creating effective online teaching and learning environments. In J. Willis (Ed.), Constructivist Instructional Design (C-ID): Foundations, Models, and Practical Examples.

Sims, R. (n.d.). Analysis of Three Instructional Design Models. [Internet] Available from: http://www.de-research.com/PhDFinalPapers/CT_3IDModels.pdf Accessed 1 June 2009



E-Learning Adoption in Organizations 3: Stages of Diffusion

November 20, 2008 by Michael Hanley · Comments Off · diffusion of innovation, elearning adoption, innovation, learning strategy, organizational development, organizational learning

According to Everett M. Rogers, people’s attitude toward a new technology is a key element in its diffusion. Rogers’ Innovation Decision Process theory asserts that innovation diffusion is a process that occurs over time through five stages:

  1. Awareness
  2. Interest
  3. Evaluation
  4. Trial
  5. Adoption

Awareness
In this first phase of the diffusion process, individuals or organizations become aware of a new idea or technology, but lack detail about it. For example, they may be aware of its name (i.e. e-learning) or the underlying technology (Web-based content delivery), but not know how this manifests itself, or how it works.

Interest
At this point, the individuals or organizations want to know more about the concept or technology: what it is, how it works, and its potential. This can be understood to be the “WIIFM” (“what’s in it for me?”) stage, as the potential user investigates how it may enhance productivity and performance, or revenue generation, for example.

Evaluation
The next cognitive process concerns assessment; the individual or organization mentally “tries out” the idea or technology. The information attained in the previous stage is applied to their particular circumstances.

Trial
If the innovation is deemed to have some potential, the individual or organization will try it out. Typically, this is a small-scale pilot implementation which provides specific information about how the solution aligns with the individual’s or organization’s requirements. According to Bohlen and Beal (1957),

…individuals need to test a new idea even though they have thought about it for a long time and they have gathered information concerning it.

(p.2)

Adoption
The final stage in the cognitive path is adoption. This phase is characterized by large-scale, continued use of the idea or technology, and by “satisfaction with” (p.2) the idea. This does not mean that the individual or organization that has accepted the idea will use it constantly; rather, it means that the diffused idea has been integrated into their schema or mental model as a valuable asset or resource.

Individuals or organizations will typically go through these stages at varying speeds, depending on factors ranging from the cost, time, and effort required to implement the diffused concept, to the return on the investment, how well it aligns with their previous experience of similar concepts, and the complexity of the idea or technology under consideration.
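
The five stages can be modelled as an ordered progression. A minimal Python sketch follows; the stage names are Rogers’, while the adopter loop and advance() helper are my own illustrative assumptions:

```python
from enum import IntEnum

class DiffusionStage(IntEnum):
    AWARENESS = 1
    INTEREST = 2
    EVALUATION = 3
    TRIAL = 4
    ADOPTION = 5

def advance(stage: DiffusionStage) -> DiffusionStage:
    """Move one step through the process; adoption is the terminal stage."""
    if stage is DiffusionStage.ADOPTION:
        return stage
    return DiffusionStage(stage + 1)

# Individuals or organizations move through the stages at different speeds:
stage = DiffusionStage.AWARENESS
while stage is not DiffusionStage.ADOPTION:
    stage = advance(stage)
print(stage.name)  # ADOPTION
```
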

______________

References:

Bohlen, J. M., Beal, G. M. (1957) The Diffusion Process, Special Report No. 18 (Agriculture Extension Service, Iowa State College) 1: 56-77. [Internet] Available from: http://www.soc.iastate.edu/extension/presentations/publications/comm/Diffusion%20Process.pdf [Accessed 3rd November 2008]

Rogers, E. M. (2003) Diffusion of Innovations, 5th ed. Simon & Schuster International.


Learning Evaluation and Strategy: Using an e-learning readiness survey

April 14, 2008 by Michael Hanley · Comments Off · assessment, data collection, e-learning readiness survey, evaluate learning, guidelines, Horton, learning strategy, non-formal learning, quantitative data, Rosenberg

During this series of posts on evaluating non-formal learning programs, I have mentioned carrying out an e-learning readiness survey without characterizing or discussing how to implement such a research instrument.

This was deliberate; in my view e-learning readiness surveys represent an alpha and an omega of evaluation: on one level they are the starting point for any evaluation of an organization’s learning initiative, on another level they define an organization’s ability to implement an effective e-learning strategy. As such, they are a bridge between the theory and the practice of implementing a learning program.

Readiness surveys enable the learning practitioner to understand and measure both the effectiveness of organizational learning and to identify the critical factors for success when developing learning programs.

I have found Marc J. Rosenberg’s E-Learning Readiness Survey to be a very effective instrument for evaluating an organization’s learning strategy, and one of the foundations of any serious evaluation of the effectiveness of learning programs.

The questions are grouped into seven areas of understanding:

  1. business readiness
  2. the changing nature of learning and e-learning
  3. value of instructional and information design
  4. change management
  5. reinventing the training organization
  6. the e-learning industry
  7. your personal commitment

The questions provided in this survey represent some of the most important strategic issues organizations face when transitioning to e-learning. Certainly there are additional questions and issues that deserve attention; add your own organization-specific items as required.
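
If you administer the survey electronically, tallying the results by area is straightforward. The sketch below groups 1-5 ratings by Rosenberg’s seven areas and reports a mean score per area; the rating scale and scoring method are my own illustrative assumptions, not Rosenberg’s published approach:

```python
from statistics import mean

AREAS = [
    "business readiness",
    "the changing nature of learning and e-learning",
    "value of instructional and information design",
    "change management",
    "reinventing the training organization",
    "the e-learning industry",
    "your personal commitment",
]

def readiness_by_area(responses):
    """responses maps area -> list of 1-5 ratings; returns the mean per area."""
    return {area: round(mean(scores), 2)
            for area, scores in responses.items() if scores}

sample = {area: [3, 4, 4] for area in AREAS}
sample["change management"] = [2, 2, 3]  # a weak spot worth investigating
for area, score in readiness_by_area(sample).items():
    print(f"{area}: {score}")
```

Low-scoring areas then become the organization-specific issues worth probing further.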

Downloads:

Marc J. Rosenberg’s E-learning Readiness Survey

References:

Rosenberg, M. J. (2006) Beyond e-Learning. San Francisco, CA: John Wiley & Sons, Inc.

