On the surface, online learning appears data-rich. In reality, the available data is shallow. Page views, clicks, and completion rates rarely explain _why_ learners behave the way they do. Across multiple large programs, I repeatedly encountered situations where useful data did not exist. Instead of designing blindly, I reframed analytics from something passively received to something deliberately designed. The resulting interventions led to:

- **Higher engagement with learning resources**
- **Lower drop-out rates**
- **Simpler, more coherent tool ecosystems**
- **Stronger career relevance**, informed by post-program outcomes

### The Problem

Learning analytics typically describe _activity_, not _intent_. When signals are missing or misleading, design decisions default to assumptions, and at scale, those assumptions compound.

**How do you design effectively when the data you need doesn’t exist?**

### Constraints

- Core LMS analytics were limited and non-diagnostic
- Programs enrolled hundreds to thousands of learners
- Design decisions had to be justified with evidence
- Many interventions had to work _before_ learner data was available

### Design Decisions

#### 1. Creating data where none existed

Optional resources hosted externally were rarely referenced in student work, and Canvas data could not tell us whether learners even clicked them. I partnered with the IT team to route outbound links through a lightweight logging server (a rough sketch of this pattern appears at the end of this post). This captured click-through behavior without disrupting the learner experience and revealed low engagement, even with **required materials**!

Instead of adding reminders or enforcement, I redesigned how resources were presented. I converted static links into interactive flipcards built in Storyline; each card posed a question or challenge that required interaction to reveal the supporting material.

**Result:** Click-through rates improved, and instructors observed more frequent and meaningful references to optional materials.

#### 2. Delaying decisions until meaningful data existed

Early in course development, learner context is often unknown, and locking in examples too early creates misalignment later. I separated course elements into two categories:

- **Stable:** concepts, skills, learning outcomes
- **Personalizable:** examples, applications, projects

Wherever possible, I delayed finalizing personalizable elements until enrollment data or early learner signals became available. When delays weren’t feasible, I designed for flexibility using example banks and context-linked activities (sketched below).

**Result:** Courses launched on time without freezing assumptions. Once learner data emerged, instructors adapted examples quickly.

#### 3. Asking learners directly when analytics failed

In one large program, enrollments were strong but completion lagged. Platform analytics could not distinguish among content difficulty, tool fatigue, and workflow friction, so I introduced a targeted dropout survey alongside the institutional one. Three patterns emerged:

- The discussion tool frustrated both learners and instructors
- Too many tools were used across courses
- Leaving the LMS created unnecessary friction

With this evidence, I piloted a single, fully integrated alternative. Although more expensive, it replaced multiple tools and reduced overall complexity.

**Result:** Drop-out rates declined, learner satisfaction improved, and instructors reported clearer grading workflows. Learners spent less time learning tools and more time learning content.
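To make the stable-versus-personalizable split from design decision 2 a little more concrete, here is a minimal sketch of how an example bank might be organized so that context-linked examples can be swapped in once enrollment data arrives. The outcome, contexts, and example titles are hypothetical placeholders, not the actual course content.

```python
from dataclasses import dataclass, field


@dataclass
class LearningActivity:
    # Stable: what the activity teaches does not change.
    outcome: str
    # Personalizable: examples keyed to learner context, chosen late.
    examples: dict[str, list[str]] = field(default_factory=dict)
    # Fallback used before any enrollment data exists.
    default_examples: list[str] = field(default_factory=list)

    def examples_for(self, learner_context: str) -> list[str]:
        """Return context-linked examples, falling back to the defaults."""
        return self.examples.get(learner_context, self.default_examples)


# Illustrative content only (hypothetical outcome and contexts).
activity = LearningActivity(
    outcome="Interpret a confusion matrix",
    default_examples=["generic spam-filter example"],
    examples={
        "healthcare": ["diagnostic screening example"],
        "finance": ["fraud-detection example"],
    },
)

# Before enrollment data arrives, the defaults keep the course launch-ready.
print(activity.examples_for("unknown"))     # ['generic spam-filter example']

# Once enrollment data shows, say, a healthcare-heavy cohort:
print(activity.examples_for("healthcare"))  # ['diagnostic screening example']
```

The point of the structure is that the stable outcome is fixed up front, while the personalizable examples stay a late-binding decision with a safe default.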
#### 4. Learning from alumni, not just current students

End-of-course surveys capture immediate reactions, not long-term value. To inform program-level design, I spoke with alumni to understand what mattered after graduation.

**A clear signal emerged:** external certificates carried disproportionate career value. While degrees covered similar knowledge, certificates made specific skills _visible_ to employers.

I identified high-value credentials, negotiated partnerships, and integrated certificate pathways directly into courses. In some cases, overlapping content became an asset rather than duplication, enabling multilingual support through partners.

**Result:** Credentials improved employability, partnerships expanded program reach, and design decisions became grounded in outcomes that LMS data could never surface.

When useful data is missing, waiting for better analytics isn’t an option. Designing mechanisms to _create_ data—through instrumentation, timing, and direct inquiry—produced clearer decisions and better outcomes.

**Measure what actually matters**, even if you have to build the measurement yourself.
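As a closing illustration of building the measurement yourself: the instrumentation in design decision 1 routed outbound resource links through a redirect that logged each click. The sketch below shows one way such an endpoint could look, assuming a small Flask service; the route, resource IDs, and log format are illustrative stand-ins, not the service my IT partners actually deployed.

```python
# Minimal sketch of a redirect-and-log endpoint for outbound resource links.
# Assumes Flask is installed; all names and the log format are illustrative.
import csv
import datetime

from flask import Flask, abort, redirect, request

app = Flask(__name__)

# Only resources registered here can be redirected to,
# which keeps the endpoint from becoming an open redirect.
RESOURCES = {
    "r101": "https://example.com/optional-reading",
    "r102": "https://example.com/style-guide",
}

LOG_PATH = "clicks.csv"


@app.route("/go/<resource_id>")
def go(resource_id: str):
    target = RESOURCES.get(resource_id)
    if target is None:
        abort(404)
    # Append one row per click: timestamp, resource ID, and an optional
    # course hint passed by the LMS link, e.g. /go/r101?course=ML101.
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow(
            [
                datetime.datetime.now(datetime.timezone.utc).isoformat(),
                resource_id,
                request.args.get("course", ""),
            ]
        )
    return redirect(target, code=302)


if __name__ == "__main__":
    app.run(port=8080)
```

Course pages then link to `/go/<resource_id>` instead of the raw URL, so every click leaves a timestamped record while the learner still lands on the resource after a single redirect.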