The Disruption of Digital Learning: Ten Things We Have Learned

  • Mike Rustici

    If I may be so bold, do you think it would be fair to summarize the coming disruption to Digital Learning as:

    Thinking like a digital native by using the myriad of tools available to us, both learning-specific and not.

    Putting the learner experience first by employing design thinking.

    Aligning learning experiences with the goals and context of the learner and organization.

  • Alex Webb

    I agree that digital learning within L&D departments is key, and technology is certainly shifting in ways that large companies can benefit from. But key to this conversation is the ’employee experience’. Not everyone likes to learn from a screen, and getting the time to do the learning, which as you suggested is only 1% of their weekly time, is really hard. I know I learn best out of my usual environment: working with people, being on a course, being taken out of my comfort zone, experiencing a new skill and applying it there and then to the business. This is how I learn best, as the behaviour change is instilled and therefore practiced back in the workplace.

    I do agree with spaced learning. If you can combine great facilitated learning courses (not just training, where often little information is retained) with spaced learning and contact points to continue their development, this for me is the best balance.

  • W. Nema

    Thanks to the author, this article is excellent and forward-looking, but I think it is missing a very important aspect of what I call “Business Structure”, which refers to the business-specific ontologies, taxonomies, and metadata necessary to enable effective contextual search. No system in the world is aware of internal corporate org or process structure. This can be created using annotation methods such as tagging. Tagging can be social (open or moderated) or formal, which requires workflow approval to serve as a classification scheme. Librarians are usually best at classification and taxonomies.

    A superset field of both is ontologies, which have gained recent attention, especially in academia and the semantic web world. Ontologies are simply the business-specific vocabulary, including the relationships that describe the entities in the field. For instance, an ontology can be created for something as general as Upstream or as specific as weekly highlights. Once the entities and relationships are defined, ontology reasoners can discover new information that isn’t explicitly stated. For instance, a tool can identify a weekly highlight item at the department level although it was reported at the division level.

    I see the role of the librarian coming back (prefixed with e-) to manage ontologies, taxonomies, and metadata. This is my own prediction, which I haven’t seen much writing about. Tooling alone without the function will accomplish absolutely nothing. This applies equally to Analytics tooling, which won’t work without 1) useful data being captured; 2) staff with Data Science skills; 3) business questions to answer (including surrounding business processes).
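    One simple form of the inference described here can be sketched in Python. This is an illustrative toy, not a real ontology reasoner: the org units and the highlight item are made-up examples, and the "reasoning" is just rolling an item up a part-of hierarchy so it becomes discoverable at every enclosing level.

```python
# child -> parent ("part_of") relationships; all names are hypothetical
org_hierarchy = {
    "Drilling Team": "Drilling Department",
    "Drilling Department": "Upstream Division",
    "Upstream Division": "Exploration & Production",
}

def ancestors(unit, hierarchy):
    """All units the given unit is (transitively) part of."""
    chain = []
    while unit in hierarchy:
        unit = hierarchy[unit]
        chain.append(unit)
    return chain

# A highlight reported at one level...
highlight = {"title": "Rig maintenance milestone", "reported_at": "Drilling Team"}

# ...is discoverable at every enclosing level, without being re-tagged there.
visible_at = [highlight["reported_at"]] + ancestors(highlight["reported_at"], org_hierarchy)
print(visible_at)
# ['Drilling Team', 'Drilling Department', 'Upstream Division', 'Exploration & Production']
```

    A real deployment would express these relationships in an ontology language and let a reasoner derive the closure, but the principle is the same: state the structure once, infer the rest.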

    I prefer the simple word context to what the author is referring to as “bringing learning to where employees are.” Context should simply apply to: {task, role, project, job}. Learning objects should be tagged with these. This is equivalent to the author’s note on page 13 “map content to different jobs and roles.” Tagging, especially when combined with e-librarian approval, can serve as what the author refers to as “program experience platforms.”
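    The {task, role, project, job} tagging scheme proposed here can be sketched as a simple filter; the object IDs, titles, and tag values below are hypothetical examples, and a production system would query a tagged repository rather than a Python list.

```python
# Learning objects tagged with the four proposed context fields
learning_objects = [
    {"id": "lo-101", "title": "Pump inspection checklist",
     "tags": {"task": "inspection", "role": "technician", "project": "plant-A", "job": "maintenance"}},
    {"id": "lo-102", "title": "Quarterly budgeting primer",
     "tags": {"task": "budgeting", "role": "manager", "project": "plant-A", "job": "finance"}},
]

def contextual_search(objects, **context):
    """Return objects whose tags match every supplied context field."""
    return [o for o in objects
            if all(o["tags"].get(k) == v for k, v in context.items())]

hits = contextual_search(learning_objects, role="technician", job="maintenance")
print([o["id"] for o in hits])  # ['lo-101']
```

    The point of restricting context to a small fixed vocabulary is exactly this: a learner's current task and role become query parameters, and tagged content answers them directly.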

    I disagree with the author’s discussion of adaptive learning only in the context of micro-learning platforms (pg 13). I personally believe the keywords “learner-based”, “adaptive” and “contextual” are the future of L&D because of the massive amounts of information, rate of change, and limited time. The first two cannot happen without a Dynamic Learner Profile which usually resides in HR systems thus dictating full and complete dynamic two-way integration with Learning systems. This requires xAPI-enabled software. Contextual cannot happen without “Business Structure” as indicated in the first paragraph.
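    The two-way integration described here rides on xAPI statements. Per the xAPI specification, a statement records an actor, a verb, and an object; the learner, activity URL, and score below are made-up example values.

```python
import json

# A minimal xAPI statement of the kind an xAPI-enabled learning system
# would send to a Learning Record Store (LRS); values are illustrative.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "A. Learner"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "http://example.com/activities/pump-inspection",
               "definition": {"name": {"en-US": "Pump inspection module"}}},
    "result": {"success": True, "score": {"scaled": 0.85}},
}
print(json.dumps(statement, indent=2))
```

    Because statements like this flow in both directions, the HR-resident learner profile can be updated from learning activity, and learning systems can read the profile back to adapt.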

    Adaptive applies to content and to assessments. To support the concept of “program experience platforms” where different customized programs can be composed from smaller information object units, each information object must be tagged with the following metadata:
    1. task
    2. job role
    3. learning objectives
    4. objective-specific question bank
    5. difficulty level
    6. user ratings
    7. usage stats
    8. testing performance stats
    If all this data is compiled per information object, and the information objects are granular enough, only then, I believe, can the adaptive promise become possible.
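    One illustrative way to model an information object carrying the eight metadata elements above, together with a naive adaptive selection rule, is sketched below. The field names, the 1-5 difficulty scale, and the selection heuristic are all assumptions for the sketch, not anything from the article.

```python
from dataclasses import dataclass

@dataclass
class InformationObject:
    object_id: str
    task: str
    job_role: str
    learning_objectives: list
    question_bank: list     # objective-specific questions
    difficulty: int         # 1 (easy) .. 5 (hard) -- assumed scale
    user_rating: float      # average user rating
    usage_count: int        # usage stats
    avg_test_score: float   # testing performance stat, 0..1

def next_object(objects, learner_score):
    """Naive adaptive rule: strong learners get harder objects, weak ones easier."""
    target = 4 if learner_score >= 0.8 else 2
    return min(objects, key=lambda o: abs(o.difficulty - target))

catalog = [
    InformationObject("io-1", "inspection", "technician",
                      ["identify faults"], ["q1"], 2, 4.5, 120, 0.9),
    InformationObject("io-2", "inspection", "technician",
                      ["diagnose faults"], ["q2"], 4, 4.2, 80, 0.7),
]
print(next_object(catalog, learner_score=0.85).object_id)  # io-2
```

    Real adaptive engines use far richer models, but even this toy shows why the metadata must exist per object: without a difficulty rating and performance stats, there is nothing for the selection rule to adapt on.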

    Item 7) on page 15 about culture cannot be over-emphasized, because it is people who usually make it or break it. Everyone nowadays is talking about machines replacing people, but human intervention cannot be replaced in areas such as mentoring, rating, social annotations, and human usage statistics.

    I find the author’s concept of treating employee-produced work (e.g. emails, documents, etc.) as learning material interesting, especially if combined with a social rating system. There are many text-mining tools that could certainly add value. From what I’ve seen, the organization of desktop files is largely ignored worldwide, although enforcing taxonomies on network drives with proper tooling could contribute to Knowledge Management.

    Finally, I agree Microsoft HoloLens holds a lot of potential for Industrial Training. All future learning content should have the following characteristics:
    1. Meet the 8 metadata elements above
    2. Mobile first
    3. Platform-Neutral: browser-based when adequate; native mobile apps otherwise
    4. xAPI and SCORM compliant
    5. Automatically published usage, rating, and performance stats

  • Dave Lee

    Thank you for this Josh. I had to set it aside until I could find the time to digest all of what you’ve shared in this single post.

    Your take on foundational/structural trends (versus the marketing-spin trends most often discussed) is powerful. L&D, if it survives this transformational period, will be radically different than it is today. You’ve provided a great roadmap to that future.

    I do agree with W. Nema’s comment on what he calls the “business structure”. While it falls under our practices of “business analysis” and “needs assessment”, I agree that our ability to fully integrate into the IT, workflow process, and cultural structures of organizations will be a major contributor to our success. Failure to do so will be our doom. As W. Nema points out, I think this fits into your last trend of the re-education of the L&D profession.

    As an affirming reaction to Mike Rustici’s comment, we need to accept that learning will be defined by the social network (and the tools we use to access it) and we better learn how to use it. As you point out, we need to view learning in a very similar way that social marketing views the marketplace.

  • David Martz

    This is a terrific, comprehensive overview of the evolution in the elearning space. With the incredible ability to instruct at scale, the critical piece I didn’t see is the ability to assess learners’ capabilities at scale. We can measure simple knowledge with multiple-choice tests, but what really matters is the ability to apply the right knowledge at the right time to solve complex real-world problems. At Authess we are using AI to chase this holy grail, and I’m curious if there are others in this ecosystem who are working on the same problem.
