Update (9/19/2019): I recently spoke with Elecia and Christopher White from the Embedded podcast about this post and embedded systems education in general. The episode dives into a lot more content than what I wrote about here — be sure to listen if you’re interested!
I’ve been very quiet on the internet lately — I bought a new house, I’ve been working on a ton of projects, and I’ve been (happily) swamped teaching in the Electrical & Computer Engineering department at my university this year.
In the fall, I taught an Advanced Embedded Systems special topics elective, and this spring, I taught the Introduction to Embedded Systems course.
I wanted to write up a few notes explaining how I changed our Embedded Systems class this semester to help others interested in Embedded Systems education.
What is Embedded Systems?
When you think about it, Embedded Systems is one of the strangest classes in an Electrical Engineering curriculum. Most classes taught to freshmen and sophomores focus on applying analysis techniques to pre-designed systems:
- Calculate the current going through a resistor
- Determine the output impedance of the amplifier
- Find the 3dB frequency of the low-pass filter
- Determine if the given system is causal
At the junior/senior level, students are asked to design/evaluate systems:
- Design an operational amplifier circuit that amplifies a microphone with a 4.7 kOhm output impedance by a factor of 100, and provides an output impedance less than 100 ohms.
- Design a buck converter that can supply a 5V 1A load from a 12V input.
- Design an FSK communication system that can operate at 22 Mbps with a BER of 10^-4.
Embedded Systems isn’t well-grounded in fundamental concepts; rather, it often serves as the application of all the above concepts into real-world systems. And it’s not because embedded systems are the end-all/be-all of electrical engineering — rather, because embedded systems are the simplest real-world examples of these fundamental principles of EE.
Also, while we teach students how to design embedded systems, designing this stuff is more about following the Design Process — not using a formula to select the optimal bias resistor value for an amplifier (which is what most EE students think of when they think of “design”).((In many ways, the Design Process is much more abstract and challenging than typical EE design problems. It’s also much more wishy-washy, which makes many professors uncomfortable teaching it.))
Embedded Systems is messy; messy means memorization.
Embedded Systems is the practical application of fundamental skills, and that’s where things get messy. That means embedded systems are messy, and Embedded Systems is a messy course. If you think about it, the messier something is, the more its learning outcomes will fall under recall and recognition — rather than analysis and evaluation.
That may seem counterintuitive — shouldn’t messy, complex systems yield (or even require) higher-order Bloom’s stuff? No, quite the opposite.
Think of a microcontroller blinking an LED attached to a GPIO pin. If we used analysis-oriented cognitive processes instead of “lower order” thinking, we couldn’t blink an LED attached to a GPIO without applying KVL/KCL, Ohm’s Law, and nonlinear models of transistors. We’d have to do this for every transistor in the microcontroller. Otherwise, we would have no idea if it would “work” or not.
Of course, we don’t use bottom-level analysis techniques to blink an LED. Instead, we teach students to memorize that a push/pull GPIO output cell will attempt to drive a pin to VDD or GND depending on the GPIO’s register value. We teach students to memorize that GPIO pins can only supply a certain amount of current, so you can’t drive large loads. We teach students to memorize that if their GPIO pin doesn’t work, they probably forgot to make it an output. We teach students to memorize that each GPIO pin is part of a GPIO port, which is controlled by a single register. We teach students to memorize that some MCUs allow you to read and write individual bits in registers, and some architectures don’t, and some only let you modify bits in some registers, but not others. We teach students to memorize the names of the GPIO registers for their specific MCU, and we teach students to memorize the steps necessary to compile, assemble, program, and debug their software.
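To put a handful of those memorized facts into code form, here’s a minimal sketch for a hypothetical memory-mapped MCU; the register names and addresses below are invented for illustration, since every vendor spells these differently in its device header.

```c
#include <stdint.h>

/* Hypothetical GPIO registers for illustration only; real parts use
 * vendor-specific names (TRIS/LAT, DDR/PORT, MODER/ODR, ...) from a
 * device header, not hand-typed addresses like these. */
#define GPIO_DIR  (*(volatile uint8_t *)0x4000u)  /* data-direction register (whether 1 means input or output varies by family) */
#define GPIO_OUT  (*(volatile uint8_t *)0x4001u)  /* output register for the whole port */

#define LED_BIT   (1u << 5)   /* assume the LED hangs off bit 5 of this port */

void led_init(void)
{
    /* Memorized fact: the pin does nothing useful until you make it an output. */
    GPIO_DIR |= LED_BIT;
}

void led_on(void)
{
    /* Memorized fact: on parts without bit-level access, you read-modify-write
     * the whole port register so the other pins are left untouched. */
    GPIO_OUT |= LED_BIT;
}
```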
All of these are abstractions — which are good. Abstract reasoning is good. But let’s not forget that abstract reasoning relies on a ton of memorization and recall. As soon as you abstract something out of your mental workspace, you have to remember how it works.
Embedded Systems education should use pedagogical approaches that are proven winners when it comes to recall-oriented learning.
Matching the Pedagogy with the Learners
Alright, so we’ve established that any practical Embedded Systems course will be heavily rooted in recall, recognition, understanding, and application. And design/evaluation work in Embedded Systems is generally less concrete than in our circuits classes. We need to take these ideas and form a pedagogy that also links up with our students.
In the Electrical & Computer Engineering department at UNL, Embedded Systems is taught at the Sophomore level. Sophomores are old enough to have good relational reasoning ability, but many are too young to have a fully developed prefrontal cortex necessary for proficient high-order abstract reasoning.
From a course scaffolding perspective, sophomore ECE students have taken a one-hour-a-week C programming course, where they seemed to learn to do little more than malloc() character arrays on a UNIX server and print them. These students have not taken (nor will they ever take) a computer architecture course, and they won’t take a digital logic design class until their junior year.
As an educator, I have to be cognizant of this when planning learning outcomes and pedagogy for the course. Here’s what I came up with:
- Use a reverse classroom approach, with online interactive reading assignments. These are the gold standard for low-order learning tasks in modern education circles. This approach allows us to cover a huge amount of content in a short time, so we can get students caught up on architecture, programming, and digital logic at the same time they’re doing embedded development.
- Use project-based learning for higher-order learning. Having students practice conceptual topics with real hardware is critical to seeing how these topics apply to real-world embedded engineering, and it gives students the opportunity to practice the design process.
- Use regular in-class quizzing as a summative assessment. These quizzes are the primary grade component for the students’ official records.
Online Reading Assignments
I experimented with a traditional lecture format when I taught Advanced Embedded Systems last year, and it was extremely inefficient at handling the learning outcomes of the class.
For my Intro course last semester, I decided to use a reverse classroom. Reverse classrooms are not where students teach the instructor (though that’s cool too!) — but rather, where students do “classwork” (learning) at home, and “homework” (practicing their learning) in class.
A lot of reverse classrooms have instructors record lectures and post them online for students to watch, and then bring questions to class. Instead, I generally use interactive reading assignments that I create on our university’s LMS.
Before each lecture, students work through the assigned reading and answer the built-in questions. In class, we often review the results of the reading assignment to see which questions students stumbled on the most. We go through questions students have about the assignment, and then we spend the remainder of class time reinforcing these topics with peer learning exercises, demonstrations, other group work, and tons of question-and-answer time.
For what it’s worth, certain lectures — especially on electronics circuits — demanded a true lecture-style reading assignment, so I ended up creating a few Khan Academy-style videos.
I really try to avoid introducing new material in lectures — more advanced students fall asleep, and emerging learners struggle to keep up. Instead, the reading assignments provided an amazingly efficient way for students to achieve learning outcomes.
The interactivity is key: over-confident students who would otherwise breeze through a reading — skimming over important details, only to be punished on a homework or quiz — are forced to re-read and think about the details they missed when they’re asked questions about the material. By interspersing questions throughout the content instead of having students fill out separate quizzes, the questions act as speed bumps that keep students from rushing through the material.
Plus, when students are finished with their reading assignment, they get instant feedback on their learning progress. This feedback cycle is critical to learning; many students would bring questions to class that were based on reading assignment questions they missed.
This is the whole goal of a reverse-classroom approach; we get to reinforce and strengthen learning in the classroom, instead of trying to do this outside of the classroom.
Course Roadmap: Top-down, JIT, and Corkscrew Learning
We spent our first week block-diagramming real-world embedded systems — mostly focused on consumer electronics products the students see in their own lives. The reading material presented very high-level explanations of a microcontroller, an IC, a circuit board, the basic peripherals inside the microcontroller, and how these things are generally tied together.
You may be surprised how little material students must learn before they can start making good-quality predictions about how products are designed. Two lectures in, students were block-diagramming ovens, gaming controllers, calculators, and electric drills.
Sure, their ovens had heating elements directly attached to DACs, and their electric drills lacked a forward/backward switch, but I was amazed at the early models students constructed in their brains about embedded systems. They know that buttons are hooked up to GPIO pins. They know that many displays require a display controller to sit on a communications bus, and they know that, as one student said, “I2C is cool because you can put multiple sensors and things on it.” Not bad for the first week of class!
Traditionally, instructors of this course presented material bottom-up, topic-by-topic. First, an introduction to binary. Then, a C programming review. After that, GPIO. Then ADCs. Then timers. Then UART/SPI/I2C. Et cetera.
This is extremely convenient for the instructor — they can work out all the learning outcomes relevant to each topic, and pound them out methodically.
Instead, I’m a big fan of JIT (just-in-time) learning, and what I call the corkscrew approach: circling back around through the same material, over and over again, each time digging deeper and uncovering more details (and exceptions to the rule, in the case of embedded systems).
As an example: I wouldn’t feel comfortable graduating a student from my course without them being able to explain the output impedance of a GPIO pin. However, I don’t need them to have mastery of this to turn an LED on or off, or as a prerequisite for operating a UART. We can cover a “first pass” through GPIO, then move on to other peripherals and, later on in the semester, circle back around to output impedance.
So how do you know when to provide instruction for a topic? We do most everything just-in-time (JIT). By the third week, we hadn’t covered conditional statements or binary operators in C, but we had worked through pointers (including a challenging reading assignment problem). Why? Because their first GPIO assignment would require them to modify memory using C programming. And that’s about all it would require. We didn’t get to output impedance until we discussed PWM, since so many of the PWM applications students consider involve driving high-power loads.
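Stripped of the peripheral details, that first GPIO assignment really comes down to something like the following sketch: dereferencing a pointer to a known address. The address is a placeholder for illustration; in the actual labs, the memory map in the datasheet (or the vendor’s device header) supplies the real one.

```c
#include <stdint.h>

int main(void)
{
    /* Placeholder address of a GPIO output register; the real value comes
     * from the device's memory map, not from this sketch. */
    volatile uint8_t *port = (volatile uint8_t *)0x010Cu;

    *port |= (1u << 2);   /* set bit 2 of that register: the pin goes high */

    while (1) { }         /* embedded programs don't return to an OS */
}
```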
But JIT doesn’t just describe the order of topics; I use it to provide real-time feedback on reading assignment questions. I will often configure our LMS’s “wrong” answers to provide additional instruction to students who misunderstood a concept. That way, answers aren’t simply “wrong” — there’s a learning component built into the question itself. Students are eager to complete the reading and get immediate feedback on what they have and haven’t mastered. This JIT feedback is so much more powerful than traditional paper-and-pencil formative assessments, where the students may get their quiz back a week or more later, with nothing more than a grade and perhaps the correct answers circled.
Don’t avoid details
One fault I see with some embedded systems curricula is to strive for affective engagement by simply throwing in the towel and converting their classes into something that would belong at a makerspace. They end up with course material that has very little relevance to the actual learning outcomes for the course, and instead focus on things like hobby servo motors, 3D printing, and driving WS2812s with Arduino libraries. This is fun stuff to play with — I encourage my students to take up electronics as a hobby. But this stuff doesn’t belong in an embedded systems classroom, where we have so little time to cover such a huge amount of information.
I think a lot of instructors end up equating “top down” with “stay out of the weeds” — that’s simply not true, though (otherwise the word “down” would not appear in “top down”).
Because of all the harsh sentiment expressed about “the dirty low-level details,” my initial gut feeling was that students would get excited about the high-level stuff and snooze during the low-level details. However, something quite different ended up happening in the classroom.
In the third week of class, I introduced GPIO (and, really, the concept of memory-mapped peripherals) to my students by typing out the bare op-codes of a “blinky” program in the hex editor programming tool, uploading the code to an 8051 MCU, and running it in front of the class. I expected quite a bit of snoozing, but instead, many of them were absolutely enthralled.
I could see that the material clicked in their brains instantly. They finally put the pieces together that we had been talking about: all compilers do is turn C code into these primitive machine instructions, and a GPIO port is just a register sitting in memory space somewhere. They know there’s no such thing as a “set pin high” CPU instruction, since they saw nothing but “MOV” and “JMP” instructions.((Here, the 8051 was super useful — the fully orthogonal instruction set let me write an entire blinky program in something like 5 bytes of code. There’s nothing wrong with doing classroom demos using microcontrollers other than the one found in the lab kit — in fact, it can help strengthen learning.))
When a student came to me later that week, unsure of why her breakpoint was “broken” in the lab she was working on, I reminded her to look at the disassembly view, and she quickly noticed, “oh weird, the compiler didn’t emit any instructions for that line of code. No wonder.” Those sorts of realizations don’t usually happen in the third week of an embedded systems class because instructors are afraid of diving into the deep end.
Learners need a concrete understanding of how these systems work at all levels to feel comfortable and confident. They like the details, and they can even get excited by them — just as long as they’ve been primed to see how these details come together to create amazing outcomes.
We more experienced folks are better at abstraction; we can comfortably and confidently use systems that we don’t fully understand. We have to fight the urge to teach topics that way.
Project-Based Learning
In addition to reading assignments, there’s a project-based learning (PBL) component to my course. This is somewhat comparable to traditional “lab assignments” found in other Embedded Systems courses (and I even refer to them as “labs” in class), but they end up serving different goals and function differently in a pedagogical sense.
The old labs for the class were weighted strongly enough that I’d consider them summative assessments: students were expected to show up to lecture, learn how to use their microcontroller by staring at the lecture slides while the instructor went over things, and then demonstrate their achievement by programming the lab, writing a report, and turning it in for credit.
These labs, which used Arduino, were things like:
- Use Serial.print() to experimentally determine the size of various data types.
- Time a GPIO pulse with a logic analyzer or oscilloscope to estimate how many clock cycles it takes to write a value to a port.
- Read a UART character, “encrypt” it using a shift cipher, and send it back to the computer.
I don’t know how other universities handle embedded labs, but I found these to be very bizarre — they have an oddly experimental/investigative tone to them; they seem to view the microcontroller as a black box, whose properties we can discover only through experimentation.((This may seem similar to other classes in an undergraduate EE curriculum, though it’s missing a big point — in those other classes, students calculate what a property should be, and then experimentally verify it. It reinforces to students that we can form models of systems that help us predict behavior without having to actually observe that behavior experimentally.))
Students leave the class thinking the only way of seeing how quickly a UART interrupt can work is by trying it and timing it with a scope.
To make matters worse, the labs are often extremely prescriptive (with literal step-by-step instructions that guide the student precisely through every task), yet ask complex questions that aren’t explored in lectures or lab material. Sadly, these questions often represent the kernel of what the lab is about — they’re the stuff the student should leave the class understanding.
For example, in their lab reports for the GPIO lab, students are expected to explain why wiggling a GPIO pin in a while() loop does not produce a 50% duty cycle. That’s how the question is framed. This requires them to have a high-order understanding of the cycle-by-cycle timing of machine instructions without ever having looked at a disassembly of their C source code, or having been taught anything about computer architecture.
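For the record, the answer they’re fishing for isn’t deep once you’ve seen a disassembly; here’s a hedged sketch of the reasoning. LED_PIN is a placeholder for whatever bit-level pin variable the toolchain provides, and the cycle counts are illustrative rather than taken from any particular datasheet.

```c
/* Why doesn't this produce a 50% duty cycle?
 * LED_PIN stands in for the toolchain's bit-level pin variable; it's a
 * placeholder, not a real identifier. */
void wiggle_forever(void)
{
    while (1)
    {
        LED_PIN = 1;   /* roughly 1 instruction cycle: pin is now high */
        LED_PIN = 0;   /* roughly 1 instruction cycle: pin is now low  */
                       /* branch back to the top of the loop: roughly 2 more
                          cycles, all spent while the pin is still low */
    }
}
/* High time: about 1 cycle. Low time: about 1 + 2 cycles. The result is a
 * duty cycle closer to 25% than 50%, and the only way to see why is to read
 * the disassembly and the instruction-timing table, which is exactly what
 * the old lab never asked students to do. */
```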
I’ve never graded those labs, but I’d imagine I’d see very poor-quality answers from the students.
Revising the Lab Kits
When I started thinking about what I would change about the class, one of the first things to go was the Arduino platform it was taught with.
I honestly have no idea why Arduino is used as much as it is in EE / CS embedded education programs, since it has such poor alignment with most learning outcomes for these classes:
- Difficulty accessing dev environment internals: how do I view the assembly code output from Arduino? Or the hex file that will get programmed? Can I control how variables are placed in RAM, or configure the linker or compiler optimization settings in any manner?
- Difficulty accessing hardware internals: there’s no debugger. It’s insane to teach an embedded systems course with a platform that does not allow you to set breakpoints or inspect or modify memory.
- Strange, non-standard C preprocessor secret sauce: “void main()”? Nah. Call functions without declaring them first? Sure. Wait, what headers are included by default? Who knows. Students leave the course thinking that DDRB, Serial.print() and uint16_t are all reserved words in C that you can use anywhere.
My other (more pragmatic) issue with the Arduino platform is the nonstandard tooling. There are tons of MCUs students will encounter professionally. These generally all work the same: you write software in C, and then you compile it, getting some sort of hex file. You use a debugger or a programmer to communicate with the MCU and load the program code into the flash memory of the microcontroller, and then you run it. Most of the time, you have a debugger attached that can set breakpoints, inspect memory, and receive trace data. The Arduino Uno ecosystem is neither representative of nor similar to any other microcontroller ecosystem, so it seems like a bizarre choice.
With the Arduino out, it was time to go MCU shopping. I approached the course as yet another engineering problem: you need to pick the right part for the job.
Microchip PIC16
For the project-based learning component of the course, I selected the PIC16F18446 + MPLAB Snap debugger. This is not a microcontroller I would generally use in my professional work, so why teach this course around the PIC16?
- DIP package allows students to breadboard their MCU to remove any “black box” concerns that come with dev boards
- Simple peripherals
- Extremely easy-to-read datasheet with step-by-step directions to configure peripherals
- Low-cost ($15) debugger
- Free IDE that runs on Windows, macOS, and Linux
- Decent IO viewer in the IDE that allows students to interact with peripherals (and potentially see why their code isn’t working)
- Toolchain supports bitwise operations with fluent syntax (RB5 = 1 sets pin RB5 high, without touching other GPIO pins)
- Peripheral code-gen tools built into the IDE
- Lots of breakpoints (3? 4?) compared to other PIC parts
I looked at literally every MCU I reviewed in the round-up, and I think I made the best decision. There were many deal-breakers to consider:
- The Texas Instruments MSP430 LaunchPad FET doesn’t seem to work in macOS. I find the MSP430 datasheets to be far too cerebral, and the clock architecture is a bit too complicated for beginners.
- The Silicon Labs EFM8 ticks off almost all the checkboxes above, but you can’t do any GPIO operations without enabling the Crossbar, which is confusing for students new to memory-mapped I/O. No DIP package either means I’d have to have students solder SOICs onto adapter boards, or use old-stock C8051 parts.
- The clock system and power gating on most Arm parts is way too complicated for students new to MCUs to understand, so that excludes a huge number of parts.
- Other MCUs often don’t have dev environments for macOS or Linux. Any student serious about embedded systems needs to have a Windows computer (for the CAD software alone), but our department doesn’t have specific computer guidelines, and students get mixed messages from other instructors in the department who work outside of the embedded systems field.
For what it’s worth, the PIC16F18446 is not without its problems:
- Programming and debugging speed is atrocious. This isn’t an MPLAB Snap problem — it’s just the PIC16. Students learned to be a bit more methodical when they were developing: measure twice, debug once.
- Peripheral Pin Select is clunky and error-prone for beginners. My students quickly learned about the spectrum between “easy” and “flexible” — while I like that students can route any function to any pin, there are a few gotchas that can be maddening to figure out. For example, when programming the MCU as an SPI master, you have to route the SCK pin as both an output from the MSSP peripheral, and also as an input back into the MSSP peripheral (there’s a short sketch of this after the list).
- ANSEL is mis-named. Why would you have to modify a register named “Analog Select” if you want to use a GPIO pin as a digital input? Well, because, “ANSEL” doesn’t really mean “enable analog” — it means “disable the digital input buffer.” At least the PIC16 doesn’t use backwards GPIO data direction values like the AVR.
- Weird interrupt syntax. Students struggled to understand why you had to write “__interrupt()” — which looks like a function — as a modifier to their ISR. Students would often declare a function with this name, and end up with bizarre code compilation errors.
- IO View is broken half the time. If you want to inspect peripheral registers in MPLAB X, IO View seems like a good place to do it. Unfortunately, in my experience, its values often get frozen. There’s nothing worse than students not being able to trust their tools.
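To make a couple of those gotchas concrete, here’s a hedged sketch of what the SPI pin setup and the ISR declaration look like under XC8. The pin assignment (SCK on RB6) and the PPS output code are assumptions for illustration; the real values have to come from the PPS tables in the PIC16F18446 datasheet.

```c
#include <xc.h>   /* XC8 device header; register and bit names below are the usual PIC16 ones */

void spi_master_pins_init(void)
{
    TRISBbits.TRISB6 = 0;    /* SCK is driven by the master, so make the pin an output */

    /* ANSEL gotcha: the "analog select" bit really disables the digital input
     * buffer, and the MSSP needs to read SCK back, so clear it. */
    ANSELBbits.ANSB6 = 0;

    /* PPS gotcha: as an SPI master, SCK has to be routed in BOTH directions,
     * out of the MSSP onto the pin and from the pin back into the MSSP. */
    RB6PPS     = 0x15;       /* pin output source = SCK1 (placeholder code; check the PPS output table) */
    SSP1CLKPPS = 0x0E;       /* MSSP clock input  = RB6  (placeholder code; check the PPS input table)  */
}

/* __interrupt() gotcha: it's a qualifier on the ISR you write, not a
 * function you call or declare somewhere else. */
void __interrupt() my_isr(void)
{
    /* check and clear the relevant peripheral interrupt flag(s) here */
}
```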
New Format
In my class, each lab has an experimental component and a PBL component.
Experimental Component
The experimental component is where students learn how to use a peripheral or microcontroller component. As an example, let’s look at the GPIO lab. I didn’t mind the existing idea — time GPIO values with a scope to infer instruction timing — but we reworked the problem to make it a bit more realistic, while adding more thorough analysis built into the lab.
In the real world, you always know how many cycles it takes for an instruction to execute (it’s clearly printed in the datasheet), but it’s relatively common to make oscillator configuration mistakes that cause your MCU to run at a speed different from the one you think it’s running at.
As a result, I had my students write C code to toggle a GPIO pin, and then look at the disassembly output and step through the instructions with their debugger. They calculated the number of cycles their code should take to execute, and then compared it with the actual execution time to determine the clock frequency of their microcontroller.
By walking them through each component of this in a prescriptive manner, they practiced reading datasheets to look at instruction timing, calculated how many cycles their loop would take to execute, captured experimental data, and fit their calculations to the data.
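Here’s roughly what that lab’s arithmetic looks like, sketched for a PIC16-style core where one instruction cycle is four oscillator clocks. The pin choice (RC0) and the cycle counts in the comments are illustrative; in the lab, the counts come from the student’s own disassembly and the instruction set table.

```c
#include <xc.h>

/* The loop students step through in the debugger and capture on a scope. */
void toggle_forever(void)
{
    while (1)
    {
        LATCbits.LATC0 = 1;   /* assume the scope probe is on RC0 */
        LATCbits.LATC0 = 0;
    }
}

/* Worked example (numbers are illustrative):
 *   Cycles per loop iteration, from the disassembly and instruction table:
 *     set pin: 1, clear pin: 1, loop jump: 2  ->  4 instruction cycles total
 *   Measured period of one iteration on the scope: 4 us
 *     ->  Fcyc = 4 cycles / 4 us = 1 MHz
 *     ->  Fosc = 4 * Fcyc       = 4 MHz   (a PIC16 executes one instruction
 *                                           every four oscillator clocks)
 *   If that disagrees with the frequency you configured, you've probably made
 *   an oscillator setup mistake, which is the whole point of the lab. */
```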
PBL Component
Additionally, each lab has a project-based learning (PBL) component, which chips away at a semester-long design project.
I’ve seen a wide variety of embedded projects used in instruction — many of which fall into the category of “small toy robots” — and I wanted to find something that felt like a commercial product that a student would go out and buy. I had students program a commercially-available DMX light, pictured above.
I was inspired by watching Big Clive’s teardown of the light fixture. The light fixture lets students program almost all the important peripherals on their MCU:
- GPIO inputs for push-button switches
- I2C for the 4-digit 7-segment display driver
- Timers/PWM for the main LED array
- ADC for the microphone
- UART for the DMX receiver
These light fixtures are very inexpensive — we bought a class set of them from eBay for something like $7 each.
I did a quick respin of the control board design — swapping out the Nuvoton N76 with the same PIC16F18446 we used in the lab kit — but kept everything else the same.
It worked really well. Students were given the light fixtures at the beginning of class and they were essentially paperweights, containing a simple demo firmware image that lit each color when you pressed a button. By the end, they were fully-functional DMX lights — complete with extra features and capabilities the students designed and implemented on their own in the last few weeks of class. One of my students uploaded a YouTube video illustrating how these projects turned out.
There was a lot of other learning happening undercover; students had to disassemble and reassemble their light fixture whenever they wanted to program the board. They learned how everything goes together and what cable plugs into what. They saw that there’s nothing magical inside a plastic enclosure for a product, and they started building confidence — feeling like they were a lot closer to being able to design this stuff than they ever thought before.
Summative Assessment
I do not use projects/labs for summative assessment. Students need the freedom to work with others (and me and my TA) on the projects since a lot of reinforcement happens while they’re working on things. Plus, projects and labs focus on different orders of learning than the reading assignments do, so if you use the labs for summative assessment, you’re testing students over material they have yet to be formatively assessed on, which is unfair to them, and prevents the instructor from providing learning interventions before the assessment.
As a result, it makes more sense to use the PBL component as additional formative assessment in conjunction with the reading assignments — both covering different areas of learning. If you’re shaky on formative versus summative assessment, all you need to know is that formative assessments serve as diagnostic tools that students and instructors can use to evaluate learning along the way. Summative assessment is used to evaluate achievement.
At the end of the day, I have to assign a grade to each student in my class, and I want that grade to ultimately reflect achievement — not effort.
As a result, I designed several small tests that I administered in class roughly every two weeks. These summative assessments had roughly 10 short-answer and multiple-choice questions on them; they were always conceptual in nature, so students with good understanding of the material could complete them in as much time as it took to write out the answers.
Students who couldn’t complete them immediately generally had much worse scores, so there’s not much point in giving students more than 10-15 minutes to finish. Even so, I allow students as much time as they’d like to complete the assessment, just so students don’t feel unfairly treated or rushed.
Wrapping Things Up
It was an eye-opening year teaching both of the Embedded Systems classes, and we’re about a month away from kicking off the fall semester, where I’ll be teaching my Advanced Embedded Systems class again (which, itself, will be completely different than last time around).
Teaching isn’t complicated; it can be approached just like an engineering problem — broken into small pieces and thought out. Once the pedagogy is working, it’s fun to dial things in and optimize your course. As an example, if I teach my Intro class again, I’m going to be thinking about how I can minimize the amount of one-on-one time students need to spend with me in office hours to work through the material; there are a lot of JIT intervention papers I’ve been reading, and I have tons of ideas floating around.
I’d love to hear what other instructors who teach Embedded Systems do in their classes — there are far too few published studies on Embedded Systems education, and few instructors blog about this stuff. Let’s get the conversation going!
Hi Jay,
I think you should look at:
- TinyFPGA BX (Luke Valenty): iCE40LP8K, 7680 logic cells (tinyfpga.com)
- PicoRV32 (Clifford Wolf): RISC-V processor, IMC instruction set
- Open-source tools: Project IceStorm
As for development, you can use any operating system. I use all three: macOS Mojave, Debian 10 Buster, and Windows 10 19H1.
As for languages, modern C++ in embedded systems is the way to go:
- GCC
- Clang
- Algorithms
- Lambdas
- vector
- constexpr
Using open source hardware and software is the best solution. It doesn’t limit you.
I have.
Fantastic post! This is the kind of embedded course I wish my university would have had, as I struggle with embedded systems many years later.
The corkscrew method is a great idea. I have had students who take an unknown concept and then chase it down a rabbit hole, when it is not sufficiently explained the first time around. How do you get the students not to obsess over a difficult element when you know it will come back up again in subsequent material?
I would also be interested in some of the educational material you have been reading, if you’re willing to share your sources.
Hi, I am a 4th-year BTech student. If any projects are available on any topic related to electronics and instrumentation, please notify me.
You’re obviously intrinsically-motivated enough to seek out information like this on the Internet; why don’t you come up with a project in an area you’re interested in, and work on it? Drop me a line using my contact form on my About page and let me know what you come up with!
I find it interesting that this course is taught at a sophomore level. At my alma mater, the University of Utah, embedded systems is a senior-level course. It was required for electrical engineering and computer engineering majors, and an elective for computer science. All students had taken analog and digital circuits. CE and CS students had already taken computer architecture and assembly, and C++. Systems C was another senior course usually taken the semester before. CE students had designed a processor from scratch in FPGAs. It was taught in a fairly standard lecture and lab format. Most lectures (at least the ones I remember) were high-level concepts in designing systems – state machines, concurrency and scheduling, hysteresis, etc. The labs provided the actual programming and hardware experience. I will say that many EE students had difficulty with approaching programming at all. I am always depressed at how poorly most students understand C, even the CS majors at a senior level. Do you feel like an approach like this at an early stage would help students get a foundation in these concepts that would translate into other courses? When is the advanced course taught?
Sounds like our CE programs are identical. My department moved up the embedded systems class to second-year so students would get more hands-on experience earlier on in the program. There are good things and bad things about that approach, as I mentioned.
I don’t know that embedded systems provides a foundation to anything that would transfer to other classes — it seems like the knowledge would flow in the other direction. However, I like the idea of introducing students to signal processing and communications systems engineering work in the context of an embedded system, and then working out all the details in later classes. My blog post could have easily been 10 times longer and gone into many more of those details.
As for improving EE programming skills, the fundamental problem I identified is that EE classes emphasize abstract reasoning by often assigning problems that have “tricks” or “shortcuts” that a student with a higher-order understanding of the topic could employ; this is in direct conflict with the idea that mastery of lower-order skills comes through intense, time-consuming practice. In other words, your college’s EE students are bad programmers because they don’t spend enough time programming; their embedded curriculum probably saw them write, at most, a few hundred lines of code for a lab. How many labs were there, total? Probably just a handful. These students graduate having written fewer than a thousand lines of code.
I really should have gone into this in my post, but it was important for me to build up a culture of grit and perseverance in my class. One early lecture, I sort of made fun of us EEs for being so lazy (“many School of Music students practice 5 hours or more each night after classes, and that’s excluding group rehearsals. When was the last time you spent 5 hours a night practicing engineering work?” I’d ask them). I spent considerable class time talking about overcoming failure. I’d bring in beautiful $2800 commercial prototype PCBs that mesmerized the students, and then tell them that the boards were glorified paperweights due to a critical routing error I made in the design. We’d talk about spending 20-30 hours diagnosing an intermittent high-level motor control bug on a robot, only to realize it was because of a 6% baud error mismatch. From our discussions, my students know it’s perfectly normal to spend 500+ hours getting a prototype out the door. I don’t want them to think that embedded engineers are super-gurus that wave their wand and solve all the problems in an instant. I want them to value rolling up their sleeves and diving into a problem, instead of trying to be clever. I always try to highlight code written by the students who put in the most amount of time completing the lab and congratulate them on their grit. It’s about building that culture.
From talking to students, it sounds like most people spent 8-20 hours a week working on my labs (which is insanely high for an EE course), but out of the 32 students I had, I didn’t receive a single course evaluation that complained about the workload. Those students were rockstars, and it started with establishing a class culture.
” The Silicon Labs EFM8 ticks off almost all the checkboxes above, but you can’t do any GPIO operations without enabling the Crossbar, which is confusing for students new to memory-mapped I/O. No DIP package either means I’d have to have students solder SOICs onto adapter boards, or use old-stock C8051 parts.”
The DIP issue is fair enough, but the crossbar is just one line of code; thereafter the ports are standard IO. If students are confused by that, should they be in another course?
The 8051 allows you to INC Port to toggle all 8 bits.
For the very simplest LED flashers (4 assembler lines), you could ignore the watchdog, and then get the students to scope their LEDs… and they learn watchdogs need to be fed and cared for…
Yeah, in hindsight, maybe the crossbar stuff wouldn’t have been as bad as I anticipated; after all, the PIC16’s ANSEL caused plenty of unanticipated havoc for the students when it came time to do digital inputs. The quasi-bidirectional I/O on the 8051 can make code really elegant, but I would have had to jump into the transistor-level GPIO port analysis way earlier than I did (and probably teach the other I/O structures, too, just to compare), otherwise students wouldn’t understand why they can only blink their LEDs in an active-low configuration. The point I’m trying to make is that I wanted students to be able to start out with a completely blank project, add a source file, add a few lines of code that seemed logical and directly applicable to what they were trying to do (like turn on an LED), and have things work. Ideally, any part I’d use would have the WDT disabled by default, a reasonable clock set-up on reset, no stack pointer / NVIC initialization required, no pin mapping configuration, and no disabled input buffers on reset. Students will have plenty of time to play with that stuff as the class progresses (and they take more advanced classes), but it’s important to keep it simple! Thanks for the comment!
I went to a vocational high school, where you’d study 4 years to become an electrical technician. In Serbia, back in 2001, lectures on embedded tech were severely crippled by a lack of funding and general disinterest in education in a post-socialist dystopia.
In 3rd grade we had a high-level computer programming class. We used to write Pascal and C programs on paper. Exams would be writing out a whole program until time ran out. I loved C more than Pascal, because it took less time to open and close a curly bracket than to write ‘begin’ and ‘end’ in letters. There were no compilers: teachers were compilers. Instead of debug messages you’d get bad grades and red pen over C code you wrote in pencil. There was a computer science lab, but it was mostly old 486 and Pentium machines that students ran games on while teachers were out drinking coffee.
In 4th grade we moved to embedded systems. That consisted of writing 8051 ASM code on paper, and then, once a month, we’d move to the CS lab and run our 8051 programs on a DOS-based simulator. Our teacher once mentioned that microcontrollers could also be programmed in a high-level language, but that only “very expensive chips” supported this feature, and that we’d never need it in our careers. Sigh.
By the time school ended, none of us had ever held or touched a microcontroller. The school didn’t have any programmers or dev boards, not even a stock of chips and breadboards we could practice on. Downloading a HEX file or blinking an LED was science fiction to the majority of my class. I might be old and grumpy, but I think today’s generations have it way too easy. Arduinos, free compilers, a wide range of dirt-cheap chips and dev tools…
After school, I started working immediately. Never went to uni. Started with general electronics, spending my 20s learning what I missed in school and how to repair things. My first projects were relay logic, where I taught myself things such as what a logic gate is, what’s a latching function, what’s a race condition, and how I could automate machines using 1920s technology: relays, micro-switches, timers, push buttons, etc. After feeling comfortable with that, I moved on to PLCs. These things were very useful; I could rewire my app simply by downloading new software. Ladder logic is a completely different way of coding, though. Lastly, when my PLC apps became too expensive for the market, I moved to microcontrollers. My self-taught experience helped very much: while PLCs in a functional sense were a bunch of microscopic relays, microcontrollers were microscopic PLCs that needed delicate care (regulated supply, decoupling, interface chips…) and very complex setup (most PLCs had proprietary IDEs), while being overwhelmingly cheap.
Sorry if I rambled on too much. If a war broke out today and I was assigned as a teacher, I’d start teaching my students about fuses, contactors, switches and relays. Like, hands-on stuff. Things which explode if you don’t wire them properly. Explosions and lightning create a long-lasting memory. Once they’ve mastered this logic of “binary mechanics” I’d teach them digital circuits, then move on to Turing machines: microcontrollers.
Really inspiring. I come from India, where I had a similar situation to yours. I decided to teach myself what was unknown to me and become a lifelong learner. Today I’m working in PLC programming, and I came here to get more insights on embedded things.
If you are able to share your materials, please do! I would love to learn from them.
> At least the PIC16 doesn’t use backwards GPIO data direction values like the AVR.
Can you explain what you mean by this? Personally, I find the AVR convention (DDR = 0 means input) more intuitive than the PIC convention (TRIS = 1 means input) because AVR has a reset state of 0, and 0 corresponds to “output driver is deactivated”. But maybe there’s some history that I’m missing?
I’ve reviewed enough MCUs to notice the industry-standard convention is I = Input = 1, O = Output = 0. The fact that the DDRx registers reset to 0 is a consequence of AVR’s backward 0=Input 1=Output scheme, not a cause of it (many AVR registers reset to something other than 0).
I appreciate your perspective–a lot of it resonates. I just wanted to comment that the MSP430G2553 is a good choice for beginners, and the MSP-EXP430G2ET LaunchPad works with the Mac OS as well as PC (the older MSP-EXP430G2 does not work with Mac). I switched to the MSP-EXP430F5529LP LaunchPad to accommodate students with Macs a few years ago, but it had a permanently mounted 80-pin QFP, and a complicated clock system (the MSP430G2553 clock is quite simple). When I discovered the MSP-EXP430G2ET LaunchPad, I immediately switched back to using the 20-pin DIP device so it could be pulled out of the LaunchPad and inserted into breadboards for projects.
If any of your students use Linux with AVR development, they can now use Bloom with avr-gdb to debug their AVR targets. Bloom supports most EDBG-based debug tools (including the MPLAB Snap). Users can use their IDE of choice, provided that it supports the GDB Remote Serial Protocol.
Bloom also provides a GUI with insight into the target’s GPIO pin states – a nice way to visualise the IO state of the target.
Currently, it only supports AVR 8-bit targets, but I’m considering support for SAM Cortex-M targets. If you want to check it out: https://bloom.oscillate.io
I am the author of Bloom. If you do try it out, feel free to get in touch if you have any questions.
Thanks,
Nav
I teach two embedded systems classes that follow the introductory embedded systems class. One uses the same microcontroller as the introductory course (AVR), so the focus can be on C programming, layered software architecture (device drivers-hardware abstraction-application), writing software for reusable and portable code and an introduction to real-time operating systems. I do not have a book for this course and I like the idea of having interactive reading material. Do you have any advice, guidance or resources you would recommend?
The other course I teach uses an ARM processor and it is similar to the introductory course so students can learn the ARM architecture and assembly language and be introduced to the more complex peripherals found on more advanced microcontrollers. I use the book “Embedded Systems with ARM Cortex-M Microcontrollers in Assembly Language and C” by Dr. Yifeng Zhu. The way the chapters are laid out matches well with the way I lecture.
Thanks for the information in this blog and for any advice you can provide.