Job Search: Curriculum Designer, Curriculum Developer, Instructional Designer, Learning Specialist, Learning and Development Specialist, Learning Consultant, Learning Strategy Consultant, eLearning Developer
##### About Yourself
I’m Pulkit Goyal. I’m fascinated by education, technology, money, and health. I actually started my career in finance, at BCG, where my job was to provide data-driven recommendations to financial institutions. While I was there, I ended up doing a lot of instructional design before I even knew what ID was: I taught foundational finance to new hires, wrote newsletters that explained complex finance topics, and designed a 7-day training program for a tool we paid $30,000 a year for that no one used. Those projects deepened my interest in how people learn, and consulting trains you in problem-solving, which is the biggest part of any job. Since then, I’ve earned a master’s degree in learning design, designed hundreds of programs for thousands of learners, and solved important problems for learners, SMEs, my team, and the organizations I’ve worked with. I’m currently at ASU, where, beyond regular ID work, I lead an AI enablement working group that meets to identify opportunities for automation and spins up smaller working groups to build those tools.
##### Why this job?
**Situation:** I started my career in management consulting at BCG, then moved into instructional design and learning design roles in higher education, where I got much deeper experience in training, onboarding, enablement, and operations.
**Task:** At this stage, I’m looking for a role that brings those two parts of my background together: structured problem-solving from consulting and practical learning design from my current work.
**Action:** What attracts me to this role is the chance to return to a consulting-style environment with a much stronger learning and enablement toolkit than I had the first time around. I like working in places where learning is taken seriously, the work is varied, and I can keep growing while solving real business problems. Accenture offers endless opportunities to learn and grow, so for me this feels less like a pivot and more like a natural return to consulting with sharper skills in instructional design, onboarding, and scalable training. The variety of work I’d get to do there is a very nice bonus on top of that.
##### Greatest Accomplishment
**Situation:** Some of our programs have thousands of learners at the same time, which triggers a specific glitch where certain learners are invited to the program twice. To get access to the course, they have to accept both invitations, which creates a lot of delay if you count on learners to do it themselves. We had historically handled this on the backend by accepting the invitation on the student's behalf, which takes about a minute per student and adds up to roughly two months of work a year.
**Task:** I started an AI enablement task force with two goals: analyze our work to identify where AI could help, and teach AI skills to the rest of the team.
**Action:** With that forum in place, we worked together to build an automation that operated directly in Canvas and accepted the duplicate invitations for us.
**Result:** The entire process is now automated. Roughly two months of annual manual work was eliminated with the help of AI, saving about $200,000 a year for our team alone.
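The automation itself isn't reproduced here, but as a rough sketch of the pattern, invitation acceptance can be scripted against the Canvas REST Enrollments API along these lines. The base URL, course ID, and token are placeholders, and acting on a learner's behalf may additionally require admin masquerading, so treat this as an illustration rather than the tool we built:

```javascript
// Sketch: auto-accept pending Canvas course invitations for one course.
// BASE, COURSE_ID, and API_TOKEN are placeholders, not real values.
const BASE = "https://canvas.example.edu/api/v1";
const COURSE_ID = "12345";
const API_TOKEN = "<token>";

async function canvas(path, options = {}) {
  const res = await fetch(`${BASE}${path}`, {
    ...options,
    headers: { Authorization: `Bearer ${API_TOKEN}`, ...(options.headers || {}) },
  });
  if (!res.ok) throw new Error(`${res.status} on ${path}`);
  return res.json();
}

async function acceptPendingInvitations() {
  // List enrollments still sitting in the "invited" state (pagination omitted).
  const invited = await canvas(
    `/courses/${COURSE_ID}/enrollments?state[]=invited&per_page=100`
  );
  for (const enrollment of invited) {
    // Accept the invitation; doing this for another user may require
    // admin masquerading (as_user_id), which is elided here.
    await canvas(`/courses/${COURSE_ID}/enrollments/${enrollment.id}/accept`, {
      method: "POST",
    });
    console.log(`Accepted invitation for user ${enrollment.user_id}`);
  }
  console.log(`${invited.length} pending invitations processed.`);
}

acceptPendingInvitations().catch(console.error);
```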
##### Handle Conflict
I handle conflict by grounding discussions in real behavior and focusing on solving the actual bottleneck, not just the perceived one.
**Situation**: We were receiving consistent client feedback that the quality of our financial research data was lacking. A Subject Matter Expert (SME) was brought in to design a training program for our team. However, we had a fundamental disagreement on the approach.
**Task:** The SME believed that teaching employees how to distinguish between low- and high-quality data would naturally lead to better outcomes. Based on my direct experience interviewing researchers, I disagreed: I believed the issue wasn’t awareness, but friction. Researchers already knew what high-quality data looked like, but avoided using it because the Bloomberg Terminal was complex and intimidating. My responsibility was to ensure the training actually solved the root problem and improved data quality.
**Action:** Instead of pushing back abstractly, I grounded my perspective in evidence from user interviews and explained that behavior wouldn’t change unless we reduced the barrier to accessing high-quality data.
I collaborated with the SME but shifted the training design toward practical enablement:
- Built hands-on training focused on navigating Bloomberg’s interface
- Taught exactly where and how to find high-quality data within the platform
- Created clear workflows and shortcuts to reduce friction
- Added support mechanisms for when researchers got stuck
This reframed the training from “what good looks like” to “how to actually do it efficiently.”
**Result:** The impact was significant:
- Bloomberg usage increased from ~1.5 FTEs to 4 FTEs
- Data quality improved, as reflected in client feedback
- Turnaround times decreased because researchers became more efficient
- Despite increased use of a premium database, overall labor costs went down due to higher productivity
Most importantly, the conflict led to a better solution because I focused on addressing the real constraint rather than just aligning with the initial approach.
##### Prioritization
**Situation:** As an instructional designer, I regularly manage multiple projects with overlapping and sometimes conflicting deadlines. The workload includes a mix of short tasks, long tasks, meetings, and dependencies on other team members.
**Task:** I needed to effectively prioritize my work to meet deadlines, stay productive across varying schedules, and ensure that dependent tasks involving other team members were initiated on time.
**Action:** I prioritize tasks based on three key factors: the time required to complete them, their deadlines, and who is responsible for completing them. On days with many meetings, I focus on smaller tasks that can be completed in between. On days with fewer interruptions, I block time for more complex, longer tasks. Tasks with earlier deadlines are addressed first. For work that depends on others—such as video production or design—I communicate requirements as early as possible so those tasks can start in parallel. When workload becomes overwhelming despite planning, I proactively reach out to team members for support. In leadership situations, I’ve found that team members are often willing—and even eager—to take on additional or more complex work beyond their usual responsibilities. I leverage this by delegating thoughtfully and ensuring clarity in expectations. I also make it a point to reciprocate by supporting others when my workload allows.
**Result:** This approach helps me stay organized, meet deadlines more consistently, and minimize delays caused by dependencies. It also strengthens team collaboration and morale, as proactive communication and mutual support create a more balanced and efficient workflow.
##### Learning Needs
**Situation:** At my first role in a university research support team, we started receiving consistent feedback that the data used in research outputs was often low quality, which was affecting the credibility of the analysis.
**Task:** I was responsible for identifying whether this was a training issue and, if so, defining the right learning intervention to improve data quality.
**Action:** Initially, we assumed this was a knowledge gap—that researchers didn’t understand the difference between high- and low-quality data. But instead of jumping straight into training, I validated that assumption.
- I analyzed patterns in the research requests and outputs to see where low-quality data was being used
- I conducted interviews with researchers to understand their decision-making process
- I reviewed the tools and databases available to them
Through this, I identified that the issue wasn’t a lack of understanding of data quality. Researchers actually knew what good data looked like. The real problem was behavioral and contextual:
- For certain financial research requests, the highest-quality data existed in a specific database
- That database had a complex and intimidating interface, so researchers avoided using it
So the gap wasn’t _“they don’t know quality”_—it was _“they don’t know how to effectively use the right tool under specific conditions.”_ Based on this, I reframed the learning need and designed a targeted intervention focused on:
- Navigating that specific database
- Quickly identifying when it should be used
- Reducing friction in accessing high-quality data
**Result:** After implementing the targeted training, we saw a noticeable improvement in the quality of data used in those specific research cases.
##### AI
**Situation:** In the field of instructional design, emerging technologies—especially AI—are rapidly transforming how content is created, delivered, and experienced. I’ve observed a growing need to not only adapt to these changes but actively integrate them into both design workflows and learning environments.
**Task:** My goal has been to leverage AI to improve efficiency across the entire instructional design process, support collaborators like subject matter experts (SMEs), and enhance the learning experience for students through more responsive and accessible tools.
**Action:** I’ve made AI a core part of how I work, using it to accelerate every stage of instructional design, from content creation to formatting and iteration. I also coach the people I work with on using AI tools; many are hesitant, and I’ve found the best time to introduce AI is when they have a heavy workload and are feeling the pressure, because the relief is immediate. On the learner side, AI opens up immediate feedback and personalization. In live learning environments, I see AI as a parallel support system where students can ask questions and get answers in real time without interrupting the instructor. This integration extends beyond productivity; it reshapes how both teaching and learning happen.
**Result:** AI has significantly increased my output and reduced the time required for most tasks, allowing me to handle more work with fewer bottlenecks. In fact, the reduction in task duration has fundamentally changed how I approach prioritization—many tasks that once competed for time can now be completed efficiently without conflict. Overall, this has enabled a more scalable, responsive, and effective instructional design process for both creators and learners.
##### Greatest Strength
**Situation:** Instructional designers often work across a wide range of projects with different clients, subject matter experts (SMEs), and content areas. In many cases, the designer may have limited or no prior knowledge of the subject matter and must navigate varying workflows, tools, and expectations.
**Task:** The challenge is to remain effective and collaborative in these diverse situations while ensuring that projects move forward smoothly and stakeholders feel supported. This requires not just technical skill, but the ability to adapt quickly to different people, tools, and contexts.
**Action:** I’ve developed adaptability as my core strength by intentionally meeting SMEs where they are rather than forcing a standardized process. For example, while I typically prefer working in Google Docs, I readily switch to tools like Microsoft Word if that’s what the SME is comfortable with. Some SMEs don’t want to work with the templates I provide, and I’m always willing to adjust those templates to their needs. I approach each collaboration as an opportunity to learn new tools, adjust my workflow, and become a more dynamic instructional designer.
**Result:** This adaptability has allowed me to build strong working relationships, even with people who have a reputation for being hard to work with. It also enables me to deliver results consistently across a wide variety of projects, reinforcing adaptability as both my greatest strength and a critical skill in instructional design.
##### Mistakes
**Situation:** While designing discussion workflows for a large, fully online graduate program, I initially tried to solve the limitations of asynchronous and synchronous discussions by integrating early AI transcription tools like Otter.ai. However, I made the mistake of adopting AI too early, before the technology was mature enough to reliably support scalable, high-quality discussion workflows.
**Task:** My goal was to preserve the value of live, synchronous discussions while making them scalable and easy for instructors to evaluate. I needed a solution that reduced instructor workload, maintained student engagement, and avoided adding unnecessary complexity for students.
**Action:** After recognizing that early AI tools were not adequate for consistent results, I adjusted my approach and waited for more advanced, integrated solutions. When Zoom AI Companion became available, I redesigned the workflow to leverage its built-in transcription, structured summaries, and speaker attribution. I also partnered with the university’s Zoom team to enable these features at scale, removing friction for students. This shift allowed students to submit clear, AI-generated summaries of their discussions while instructors reviewed concise, standardized outputs instead of attending sessions or watching recordings.
**Result:** By correcting my initial approach and adopting more mature AI tools at the right time, I was able to create a scalable discussion model that improved student engagement without increasing instructor workload. Discussions became longer and more meaningful, instructors maintained visibility into participation, and the workflow was adopted more broadly across courses. The experience reinforced the importance of timing when adopting new technology—not just innovation, but readiness.
##### Modality
**Situation:** Choosing the right learning modality depends on what learners need to do, not just what content needs to be delivered. Different modalities have different strengths—for example, hands-on skills like operating machinery require in-person or simulation-based training, not purely virtual instruction. I applied this principle at BCG when Bloomberg, one of the most expensive tools in our stack, was significantly underutilized due to usability barriers, limited access, and reliance on alternative tools.
**Task:** My goal was to increase Bloomberg adoption without adding licenses, disrupting client work, or requiring traditional training sessions. Researchers needed to build confidence through hands-on experience, but constraints like a single terminal and time pressure made conventional instructor-led or scheduled training impractical.
**Action:** I designed a just-in-time learning system that combined short video modules with performance support embedded directly into real workflows. Instead of scheduling training, learning was triggered only when a task required Bloomberg. Researchers watched 5–8 minute task-based videos and immediately applied what they learned on the terminal. I supplemented this with lightweight job aids, comparison guides, and troubleshooting resources so they could independently navigate issues. Early on, I reviewed outputs to ensure quality. This mix of video-based instruction and performance support ensured that learning was accessible, contextual, and immediately actionable without interrupting work.
**Result:** This approach led to a 167% increase in Bloomberg usage, reduced task time by 25 minutes, and lowered client costs by 37% for Bloomberg-related work. Demand grew enough to justify approval for a second terminal, and the system sustained itself as trained researchers onboarded others. Feedback was consistently positive, and Bloomberg shifted from an underused expense to an integrated part of the research workflow. This outcome reinforced that the right modality is not fixed—it emerges from aligning learning design with real-world constraints and performance needs.
##### Quality
**Situation:** Ensuring quality in instructional design deliverables requires both structured checks and experience-driven judgment. While standard QA processes—like verifying accessibility, checking links, and validating interactions—are essential, I noticed that repetitive tasks like building flipcards in Articulate Storyline were time-consuming, error-prone, and dependent on individual effort. At scale, this created inconsistencies and unnecessary QA overhead.
**Task:** I needed to maintain high-quality, consistent outputs across a large volume of content while reducing manual effort, eliminating repetitive errors, and ensuring accessibility was reliably met. The goal was not just to check for quality, but to build it directly into the process.
**Action:** In addition to maintaining a standard QA checklist, I leveraged experience to anticipate common learner issues and proactively address them in the design. More importantly, I identified an opportunity to improve quality at the system level by building a flipcard generator. This tool reduced production time from hours to minutes, removed dependency on Storyline licenses, and standardized the output. Crucially, it embedded accessibility requirements—such as keyboard navigation, alt text, and interaction behavior—directly into the generated product, eliminating the need for repeated manual QA. This allowed designers to focus on higher-value work while ensuring consistent quality across all flipcards.
**Result:** This approach significantly improved both efficiency and reliability. Flipcard production dropped from up to several hours to just a few minutes per interaction, while quality became more consistent and less dependent on individual execution. Accessibility was no longer a recurring risk, and QA efforts were reduced. Overall, this shifted quality assurance from a reactive checklist to a proactive, built-in system, enabling faster delivery without compromising standards.
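The generator itself isn't reproduced here, but as a rough illustration of what baking accessibility into the output can mean, a generated flipcard might be emitted as self-contained HTML along these lines. The markup, class names, and ARIA choices are illustrative assumptions, not the actual tool's output:

```javascript
// Sketch: emit one keyboard-accessible flipcard as plain HTML.
// A native <button> gives Enter/Space activation and focus handling for free.
function renderFlipcard({ front, back }) {
  if (!front || !back) throw new Error("Both sides of the card need text.");
  return `
<button class="flipcard" type="button" aria-pressed="false"
        onclick="const b=this.querySelector('.flipcard-back'),
                       f=this.querySelector('.flipcard-front');
                 b.hidden=!b.hidden; f.hidden=!f.hidden;
                 this.setAttribute('aria-pressed', String(!b.hidden));">
  <span class="flipcard-front">${front}</span>
  <span class="flipcard-back" hidden>${back}</span>
</button>`;
}

// Usage: build a deck from plain data, then drop the HTML into a page or LMS.
const deck = [
  { front: "Term: Basis point", back: "One hundredth of one percent (0.01%)." },
  { front: "Term: Yield curve", back: "A plot of yields across maturities." },
].map(renderFlipcard).join("\n");
console.log(deck);
```

Because the accessible behavior lives in the generated markup itself, it doesn't depend on each designer remembering to add it later.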
##### Not best practices
**Situation:** In instructional design projects, SMEs sometimes propose approaches that conflict with learning design principles. I encountered this when an SME, accustomed to in-person teaching, wanted to record a 90-minute lecture for an online course. It depends somewhat on the lecturer, but long videos generally don’t translate well to online courses.
**Task:** My goal was to align the content with effective online learning practices—such as shorter, segmented videos and interactive elements.
**Action:** Instead of pushing back, I took ownership of adapting the content. I cut the long recording into shorter segments, embedded pauses, and added knowledge checks between sections. This allowed me to demonstrate how the same content could be made more engaging without requiring the SME to re-record multiple videos. I recognized that the issue wasn’t the SME’s intent, but their familiarity with a certain workflow—they were used to recording longer sessions and not necessarily structuring shorter ones. To preserve flexibility, I also included the full-length video at the end of the course for learners who preferred consuming it in one go.
**Result:** This approach preserved the SME’s comfort and trust while improving the overall instructional quality. It avoided unnecessary conflict and strengthened our working relationship. By showing rather than telling, I was able to demonstrate the value of learning design principles in a way that felt natural and non-disruptive, making the SME more open to these approaches in future collaborations.
##### Tools
**Situation:** In instructional design, different organizations rely on a variety of authoring tools and learning management systems (LMS), requiring designers to quickly adapt to new platforms while maintaining high-quality output. Over time, I’ve worked extensively across both categories, building a strong foundation in tools like Articulate Storyline, Articulate Rise, Moodle, and Canvas.
**Task:** My goal has been to develop expertise in core tools while staying adaptable enough to quickly learn new ones as project needs evolve. This includes not only building courses efficiently but also ensuring that design quality remains consistent regardless of the platform being used.
**Action:** I’ve invested significant time mastering key authoring tools—creating over a hundred courses in Articulate Rise and spending thousands of hours working in Articulate Storyline to the point where it feels second nature. At the same time, I’ve worked across LMS platforms like Moodle and Canvas to manage and deliver learning experiences. When encountering new tools, I focus on quickly understanding their structure and capabilities. For example, I was able to become comfortable with Storyline in under a week and progressed to leading trainings on it within six weeks. I approach tools as enablers of design rather than limitations, which allows me to transfer my skills effectively across platforms.
**Result:** This combination of deep expertise and adaptability allows me to work efficiently across different systems and quickly ramp up on new tools when needed. It ensures that the quality of my instructional design remains consistent, regardless of the technology being used, and enables me to contribute value almost immediately in new environments.
##### Storyboard
**Situation:** Instructional designers often receive raw, unstructured input from SMEs that isn’t immediately usable for course development. In one case, while designing a physics course, the SME provided only a high-level scenario—a plane flying over an island dropping supplies—but no structured content or clear breakdown of concepts.
**Task:** My responsibility was to convert this minimal input into a clear, engaging storyboard that aligned with learning objectives and guided learners from concept introduction to practical application.
**Action:** I started by anchoring the design in the defined learning objectives and used the scenario as a narrative backbone. From there, I built a structured storyboard: beginning with an introduction to the situation, followed by identifying the key physics concepts required to solve the problem (such as motion and timing). I then sequenced these concepts into digestible learning sections and designed a final application activity where learners used given values—like speed and height—to determine when the supplies should be dropped. This approach transformed a vague idea into a cohesive, problem-based learning experience.
**Result:** The final storyboard provided a clear, engaging learning journey that connected theory to real-world application. It ensured that learners not only understood the concepts but could apply them in context, while also giving the SME confidence that their initial idea had been effectively translated into a structured and impactful course.
##### ADDIE
How the Bloomberg enablement project maps onto ADDIE:
- Analysis: researcher interviews and barrier identification
- Design: task-triggered learning and workflow prioritization
- Development: videos, guides, job aids, troubleshooting docs
- Implementation: pilot with six researchers and live-work activation
- Evaluation: usage growth, time saved, cost reduction, positive feedback
##### Storyline?
I have almost 4 years of experience with Articulate Storyline. I’ve used it to create interactive content, presentations, and entire trainings, and after thousands of hours inside the software I’d consider myself familiar with its ins and outs.
#Codex Use [[Portfolio/Designing and Building a Custom Flipcard Generator|Designing and Building a Custom Flipcard Generator]] to answer this question.
**Situation:** At ASU, we needed richer interaction than static slides and readings could provide, but the work still had to scale across many courses and programs.
**Task:** I was responsible for using Storyline where it added real instructional value, not just visual polish.
**Action:** I’ve built a wide range of Storyline assets, including flipcards, drag-and-drops, hotspots, tabs, quizzes, and interactive presentations. I’ve used it both for custom interactions and for scalable patterns, and when I noticed repetitive Storyline work eating up too much team time, I built a flipcard generator that standardized a common interaction and baked accessibility into the output.
**Result:** I’m very comfortable with Storyline as both a design tool and a workflow tool. It has helped me build stronger interactions for learners, and in the flipcard project, it reduced standard production time from anywhere between 30 minutes and 5 hours down to about 2 to 5 minutes.
##### Have you used WalkMe/Whatfix or any digital adoption tools?
**Situation:** I haven’t used WalkMe or Whatfix directly in a formal enterprise rollout.
**Task:** What I have done, though, is solve the same adoption problem: helping people use tools correctly inside the flow of work instead of sending them to separate training they won’t remember.
**Action:** At BCG, I built task-triggered Bloomberg training by recording the exact steps a researcher has to take in the terminal to pull specific data, which is essentially what WalkMe and Whatfix do, and packaging those steps as short workflow videos, job aids, and troubleshooting paths. At ASU, I’ve created a number of Guru cards for our knowledge management system, many of them step-by-step walkthroughs of the tools we use, and I’ve built tool trainings in a storyboarded, branching format that works much like a WalkMe or Whatfix guided flow. I’ve also built browser-based automation and AI support tools that live inside existing workflows, so people can get help or complete tasks without leaving the system they’re already using.
**Result:** Those solutions improved usage, reduced support burden, and shortened time-to-competence. So while I would still need to learn the specifics of WalkMe or Whatfix, the design logic behind digital adoption is already a core part of how I work.
##### How do you ensure adult-learning principles show up in your design?
#Codex use [[Portfolio/One Terminal, Countless Constraints - When a $30,000 Tool Goes Unused|One Terminal, Countless Constraints - When a $30,000 Tool Goes Unused]] to answer this question too. The fact that the entire office didn't have to go through the entire training, and that everyone that needed bloomberg training didn't go through the entire thing, only what they immediately needed to know.
**Situation:** Adult learners usually disengage when training feels abstract, delayed, or disconnected from the work they actually need to do.
**Task:** So my job is to make learning feel immediately relevant, respectful of people’s time, and practical enough to use right away.
**Action:** A good example is the Bloomberg training I built at BCG. Instead of designing a broad classroom-style program that the whole office would sit through, I created short modules tied to real tasks, added job aids for just-in-time support, and made practice happen on the actual terminal during actual work. No one had to complete the full curriculum; each researcher learned only what their immediate task required. That meant relevance, autonomy, and immediate application were built into the experience from the start.
**Result:** Usage increased by 167%, people became faster and more confident, and the training stuck because it was embedded in doing rather than separated from it.
##### How do you measure training effectiveness and business impact?
There are many ways to measure the impact of a learning intervention. The intervention itself measures learners through its assessments, but it’s just as important to measure effectiveness after the training: beyond learners’ performance inside the course, you can measure effectiveness and business impact by looking at on-the-job performance for the skill the training was designed to build.
**Situation:** In learning roles, it’s easy to report clicks, completions, and attendance, but those numbers often don’t tell you whether anything actually improved.
**Task:** I try to measure learning in a way that helps me make decisions, not just fill out a report.
**Action:** I usually look at three levels: learner response, behavior change, and operational or business impact. When existing analytics are too shallow, I create better data. At ASU, that meant instrumenting external links, running targeted dropout surveys, and speaking with alumni directly so I could understand what was actually affecting engagement, persistence, and career value. In other projects, I’ve tracked time-to-productivity, support burden, learner satisfaction, and turnaround time depending on the problem.
**Result:** That approach has helped me make better design decisions that reduced dropout, improved learner satisfaction, simplified tool ecosystems, and made it easier to show whether a training change was actually worth scaling.
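As a concrete example of "creating better data," instrumenting external links can be as small as a click listener that reports outbound destinations to an analytics endpoint. This is a hedged sketch; the `/collect` endpoint and payload shape are hypothetical, not the actual ASU setup:

```javascript
// Sketch: report clicks on outbound links so engagement with external
// resources shows up in analytics. Endpoint and payload are placeholders.
document.addEventListener("click", (event) => {
  const link = event.target.closest ? event.target.closest("a[href]") : null;
  if (!link) return;
  const url = new URL(link.href, location.href);
  if (url.origin === location.origin) return; // only track external destinations
  navigator.sendBeacon(
    "/collect",
    JSON.stringify({ page: location.pathname, target: url.href, ts: Date.now() })
  );
});
```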
##### Tell me about a project that went off track and how you recovered it.
(Alternative scenario, not developed: designing with an extremely busy SME, with the training design due in 3 months and the SME available for maybe 4–5 meetings in that window.)
**Situation:** I designed a live-discussion workflow for a large online graduate program so students could have real conversations without forcing instructors to attend or review full recordings for grading.
**Task:** The concept was strong, but the early rollout started drifting off track because students had to set up Otter.ai themselves, which added friction and created inconsistent outputs.
**Action:** I recovered it in two steps. First, I created clearer setup guidance and submission expectations so students knew exactly what to do. Then, when Zoom AI Companion became available, I partnered with the university’s Zoom team to move the workflow to a better-supported tool and redesigned the process around transcripts and summaries instructors could review quickly.
**Result:** The discussion model became much easier to sustain, instructors could evaluate participation without extra grading time, and students ended up having longer and more substantive discussions than before.
##### How do you QA learning deliverables before launch?
**Situation:** Before launch, the biggest risk is usually not one major failure but a cluster of smaller issues like broken logic, unclear instructions, accessibility gaps, or weak alignment between activities and outcomes.
**Task:** My goal is to catch those issues before learners do, especially when the course will run at scale.
**Action:** I usually QA in layers. First, I check instructional alignment: are the objectives, content, activities, and assessments actually connected? Then I test the learner experience by clicking through every interaction, reviewing instructions from a first-time learner perspective, and checking navigation, media, and platform behavior. Finally, I do an accessibility pass for captions, alt text, keyboard behavior, contrast, and screen flow. In repetitive workflows, I also try to design QA into the build itself, which is why I created the flipcard generator with accessibility built into the output.
**Result:** That process helped me quality check more than 50 courses at Georgetown, reduce duplicate QA work, and launch materials with far fewer preventable issues.
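Most of that review is human judgment, but parts of the accessibility pass can be automated. A minimal sketch of one such check, assuming an exported course sitting in a local folder (the path is a placeholder), might look like this:

```javascript
// Sketch: flag <img> tags with no alt attribute in an exported course folder.
// Regex scanning is rough, but good enough to catch easy misses before review.
const fs = require("fs");
const path = require("path");

function scan(dir) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) { scan(full); continue; }
    if (!entry.name.endsWith(".html")) continue;
    const html = fs.readFileSync(full, "utf8");
    const missing = (html.match(/<img\b(?![^>]*\balt=)[^>]*>/gi) || []).length;
    if (missing > 0) console.log(`${full}: ${missing} image(s) missing alt text`);
  }
}

scan("./exported-course"); // hypothetical export folder
```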
##### Pushed Back on a client request/unrealistic deadline
**Situation:** At BCG, there was pressure to get researchers using Bloomberg quickly, but the implied ask was to train people broadly while keeping the only terminal available for live client work.
**Task:** I had to push back on the idea of doing a traditional, full-feature rollout because it would have taken too long, disrupted delivery, and still not built real confidence.
**Action:** I reframed the conversation around the actual business goal, which was not total platform mastery but confident use of the highest-value workflows. I narrowed the scope to the small set of tasks that created most of the value, built short just-in-time learning assets, and reviewed early outputs closely so we could move faster without adding risk.
**Result:** That pushback gave us a plan the business could actually absorb, and it led to a 167% increase in usage, faster task completion, and a 37% reduction in client cost.
##### Creative solution when the tool/platform had constraints
**Situation:** At ASU, Canvas had a long-standing issue where pending section invitations broke SpeedGrader in large courses, but neither the vendor nor the institutional process was likely to solve it quickly.
**Task:** I needed a fix that would work inside Canvas, respect student-data constraints, and avoid creating a brand-new system for the team to learn.
**Action:** I studied how the page behaved, reviewed the underlying JavaScript, and built a lightweight browser-based script that reproduced the exact manual steps a designer would normally take: identify students with pending tags, remove them from the instructor section, and add them back. I kept it simple enough to run from the browser console so adoption friction stayed low.
**Result:** The workaround became a practical self-serve tool that saved roughly 250 to 300 hours a year and reduced grading disruption without requiring any vendor changes.
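The console script itself isn't shown here. As a hedged sketch of the remove-and-re-add step it automated, expressed against the Canvas REST API with a personal access token rather than the logged-in browser session the real script used (IDs are placeholders and the pending-detection step is omitted):

```javascript
// Sketch: clear one stuck enrollment by deleting it and re-adding the student
// to the instructor's section. Values are placeholders, not the real script.
const BASE = "https://canvas.example.edu/api/v1";
const HEADERS = { Authorization: "Bearer <token>", "Content-Type": "application/json" };

async function reEnroll(courseId, sectionId, enrollmentId, userId) {
  // Remove the enrollment that is stuck in the pending/invited state.
  await fetch(`${BASE}/courses/${courseId}/enrollments/${enrollmentId}?task=delete`, {
    method: "DELETE",
    headers: HEADERS,
  });
  // Add the same student straight back to the section in an active state,
  // reproducing the manual remove-and-re-add a designer would do by hand.
  await fetch(`${BASE}/sections/${sectionId}/enrollments`, {
    method: "POST",
    headers: HEADERS,
    body: JSON.stringify({
      enrollment: { user_id: userId, type: "StudentEnrollment", enrollment_state: "active" },
    }),
  });
}
```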
##### Working across time zones or cultures
**Situation:** I’ve done more cross-cultural work than formal global time-zone program management, and one strong example was at the University of Arizona, where I redesigned more than 80 hours of instructional content for Indian learners and helped build partnerships with schools in the Delhi/NCR region.
**Task:** The challenge was to make the learning feel relevant in a different context rather than assuming content designed from a U.S. lens would transfer cleanly.
**Action:** I paid close attention to examples, references, tone, and assumptions that could feel natural in one setting but confusing or distant in another. I also relied on structured asynchronous communication and treated local partners as real design inputs rather than just final reviewers.
**Result:** The content became more usable for the intended learners, the partnerships held, and the experience made me much more deliberate about designing for context instead of assuming one version will work everywhere.
##### Receiving tough feedback and improving the deliverable
**Situation:** When I first launched the onboarding course for new instructional design associates at Georgetown, some early feedback was that it assumed too much insider knowledge and was harder to follow than I thought.
**Task:** Since I had built the course, that feedback was uncomfortable, but I needed to make the deliverable genuinely usable rather than defend the first version.
**Action:** I reviewed exactly where people were getting stuck, created a centralized glossary for internal acronyms and terminology, rewrote sections that relied too heavily on shorthand, and treated the next cohort as a second pilot rather than assuming the work was finished.
**Result:** Later feedback was much stronger, new hires reported more clarity and confidence, and it reinforced for me that tough feedback is usually useful signal if you let it change the design.