Announced 2+ years ago (almost 3, now that I look: https://www.espressif.com/en/news/ESP32-C5 ) and sampling 1+ year ago, good to see it finally come. 5GHz support is increasingly important.
Why is 5GHz increasingly important? For most IoT applications, isn't the better wall penetration of 2.4GHz more important than the increased peak speeds of 5GHz?
Some places want to run dual 5 GHz radios on their APs (for general client density) rather than one 2.4 GHz and one 5 GHz radio, but 2.4 GHz-only IoT devices force you into keeping a 2.4 GHz radio infrastructure active. These kinds of environments tend to turn the power down on 2.4 GHz anyway, as "goes through walls" can actually be a bad thing for coverage when multiple APs are at play (SNR is more important than raw power).
For a typical consumer home use case, continuing to use 2.4 GHz is most likely ideal. Though some apartment complexes have such bad 2.4 GHz interference that even that might not be universal.
YMMV but in my house, with a commercial-grade wifi AP, I found that devices on 5 GHz get much better speed and range due to all the local noise on 2.4 GHz.
Yes and no, depending on the environment. In an apartment building, the wall penetration is a liability, as the 2.4 GHz spectrum, with only three non-overlapping channels, gets extremely congested. Going 5 GHz helps immensely, with more channels available and less penetration, so you get more spectrum reuse.
It's not the peak speeds, it's the spectrum use. The 2.4GHz ISM band has 100 MHz of available spectrum, the 5GHz wifi spectrum is 740 MHz wide, and is still occupied by fewer deployed devices.
To an IOT application, it's the difference between chatting with a friend at a quiet outdoor cafe and trying to shout at her in a crowded bar.
In my opinion 2.4 GHz is rapidly becoming a non-starter. Companies like Eero, TP-Link, and Spectrum are using 40 MHz wide swaths of 2.4 GHz for their mesh backhauls. Sitting in my home office in a single-family detached home, I can see 24 different SSIDs all running 40 MHz on the 2.4 GHz band, with 7 having a signal strength greater than -80 dBm.
5 GHz doesn't propagate very far, so putting IoT devices inside your home on 5 GHz makes a lot of sense. With 6 GHz coming online and being reserved for high-bandwidth applications, 5 GHz for IoT makes even more sense.
Pop open a WiFi scanner sometime. Unless you're living way out in the country, the 2.4GHz spectrum will be pretty much full. Everyone has a 2.4GHz router and you're likely to get a lot of interference.
Really it's the same reason computers moved to 5GHz, and now 6GHz.
The big anti-feature is that developers can block users from flashing the chips.
Yes, there's a security angle, but if I have the chip in my hands, I should be able to flip some pin to reprogram the chip and prevent all the e-waste.
>> The big anti-feature is that developers can block users from flashing the chips.
There's a liability angle too. If a company (or person) makes a product that has any potential for harm and you reprogram it prior to an accident, you should be the one taking responsibility, but you probably won't.
Another angle is that the hardware may be cloneable and there's no reason anyone should be able to read out the code and put it into a clone device. There is a valid use case in making a replacement chip for yourself.
Companies will buy far more chips than hobbyists, so this feature caters to them and for valid reasons.
>> Yes, there's a security angle, but if I have the chip in my hands, I should be able to flip some pin to reprogram the chip and prevent all the e-waste.
What if the chip used masked ROM? Your desire is not always feasible. You can always replace the chip with another one - and go write your own software for it </sarcasm>.
BTW I'm a big fan of Free Software and the GPL, but there are places where non-free makes sense too.
> there are places where non-free makes sense too
Seriously now, where is that? The only scenarios I can think of are devices that could put others at risk, like large vehicles. But even then, many countries allow modified vehicles on the road.
But everything else should be fair game. If it's my device and only me at risk, why should anyone else get a say?
At least in the European countries I am aware of, owners will have a hard time at a police check if the modifications aren't among the ones allowed by law or, depending on the modification, if it is missing from the car's documentation.
A major worry I have is that the EU is bringing forth some serious cybersecurity regulations: affecting equipment with radios (WiFi, Bluetooth, ...) as part of the Radio Equipment Directive later this year, and soon affecting everything as part of the Cyber Resilience Act. This enforces some good security practice, but also has a lot of stuff in it that's way easier to comply with if you just say, "the device is locked down with hardware-protected write protection (or Secure Boot)".
To my understanding, there's nothing specifically preventing companies from giving the user the ability to disable write protection or load their own signing keys, but it means that the default will be to have locked-down devices and companies will have to invest extra resources and take extra risks with regard to certification into enabling users to do what they want with the hardware. I predict that the vast majority of companies making random IoT crap won't bother, so it's e-waste.
I am afraid this is a very narrow reading of the CRA. Did you read the act yourself, or some qualified opinion by a European lawyer? Security updates are the default demand of the CRA, and not having them is an exception that requires an assessment of risk (which I would assume means it's only viable for devices not directly connected to the Internet).
An (equally narrow ;)) quote:
"ensure that vulnerabilities can be addressed through security updates, including, where applicable, through automatic security updates that are installed within an appropriate timeframe enabled as a default setting, with a clear and easy-to-use opt-out mechanism, through the notification of available updates to users, and the option to temporarily postpone them;"
Thus, I expect the RED to stipulate that only radio firmware be locked down, to prevent you from unlocking any frequencies, but the CRA to require all other software to be updatable to patch vulns.
I have not read the RED or the CRA, nor discussed what they specifically say with a lawyer who has read them. However, I have gone through a recent product R&D process in Europe where the product has WiFi and LTE connectivity, so it falls under the RED (even though WiFi and 4G are handled by off-the-shelf modules). I have read parts of the EN-18031 standards (mostly using their decision trees and descriptions of decision nodes as reference), I've been to a seminar with a Notified Body about the practical implications of the RED, and I've filled out a huge document going through all the decision trees in 18031 and giving justifications for why the specific path through each decision tree applies to our product. I've also discussed the implications of the RED and 18031 with consultants.
I don't doubt you with regard to what the RED and the CRA actually say. However, I'm afraid that my understanding better reflects the practical real-world implications for companies who just need to get through the certification process.
18031 requires an update mechanism for most products, yes, however it has some very stringent requirements for it to be considered a Secure Update Mechanism. I sadly don't have the 18031 standard anymore so I can't look up the specific decision nodes, but I know for sure that allowing anyone with physical access to just flash the product with new unsigned firmware would not count as a Secure Update Mechanism (unless, I think, you can justify that the operational environment of the product ensures that no unauthorized person has physical access to the device, or something like that).
EDIT: And I wanted to add, in one common use case for microcontrollers, namely as one part of a larger product with some SoC running Linux being the main application processor and with MCUs handling specific tasks, you can easily get a PASS in all the EN-18031 decision trees without an upgrade mechanism for the MCUs themselves. In such products, I can imagine a company deciding that it's easier to just permanently lock down the MCU with a write protect than to justify leaving it writeable.
Thank you, an interesting (and somewhat sad) perspective. It would be unfortunate if these two regulations combined resulted in less firmware update capability, not more.
Yeah, it's sad. I can say with certainty that there are products whose developers would have decided to leave MCUs and/or SoMs writeable based on analysing the threat model, but where the rigid decision trees in EN-18031 around secure storage mechanisms and secure update mechanisms make that too difficult to justify.
How much chip re-flashing / re-use is being done at the moment? I'm not convinced e-waste is repurposed at any real scale... although it's an interesting premise if electronics become more modular and can easily be disassembled, and e.g. millions of ESP32-based chips end up on the secondhand market.
Quite a bit in the IoT market, especially in the low-mid tier and Chinese imports. There's an entire ecosystem of custom OSes for home automation that run on these ESPs, ESPHome for example. I have flashed quite a few smart sockets to run an MQTT/Kafka messaging client rather than an unknown vendor's software that has an open socket to offshore IPs.
From what I've seen, very little. I _think_ it has something to do with the kind of people who work on embedded systems generally wanting the freedom that comes with making things from scratch, which results in not much interest in repurposing old things beyond making them work with their overall IoT network.
I recently reverse engineered an e-waste STEM toy from scratch ( https://github.com/padraigfl/awesome-arcade-coder ) and the general responses I got from places were:
a. to salvage the microcontroller and other relevant parts (probably worth $4 off a board that would cost $100+ to replicate)
b. a weirdly hostile attitude about the ethics of reverse engineering regardless of the motives (guessing people have been burned a lot with people stealing their designs)
I've mostly worked on the frontend and don't have much knowledge of embedded systems at all but it wasn't anywhere near as hard as I expected. Keen to find some other ESP32 devices to tweak (suggestions welcome!). I guess even if making them unflashable becomes the norm it won't be too hard to just swap the ESP32 off the board with a new one.
The ESP32 (and other ESP chips) are somewhat common in smart home/IoT gear, particularly in devices that depend on a cloud service to function. There's a growing trend in the smart home community of re-flashing cloud-dependent ESP32-based hardware with ESPHome, which makes the device fully locally controlled, eliminating the risk of the cloud service being discontinued/enshittified.
It's not a particularly common thing yet, but smart home enthusiasts are becoming increasingly concerned about the expense and effort required to replace cloud-dependent hardware because the manufacturer decided the cloud service isn't worth maintaining anymore.
How is this different in practice to the regular ESP32's secure boot, where you can technically flash the chip with whatever you like but unless you have the signing key the bootloader will refuse to load it?
If the key is stolen, you can still protect your hardware.
You can generate the keys on-device during the initial provisioning and have it encrypt the flash with that key, so every device generates its own unique key and there isn't any practical way to extract it; even the developer can't flash it directly, and OTAs are required to update the firmware. This effectively means nobody can flash the chip anyway since you can't know the keys. Is there some sort of attack vector here I'm missing that gets mitigated by preventing flashing entirely?
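For what it's worth, you can also have the firmware report how locked down a given unit actually is. A rough sketch, assuming ESP-IDF's esp_flash_encrypt.h API (function and enum names as found in recent IDF releases; double-check against the version you're on):

    #include <stdio.h>
    #include <stdbool.h>
    #include "esp_flash_encrypt.h"  /* ESP-IDF bootloader_support component (assumed header) */

    /* Print whether flash encryption is active and which mode it's in.
     * "Development" mode still leaves a plaintext reflash path open over serial;
     * "release" mode closes it, which is what shipped products typically use. */
    void print_flash_protection(void)
    {
        bool enabled = esp_flash_encryption_enabled();            /* reads eFuse state */
        esp_flash_enc_mode_t mode = esp_get_flash_encryption_mode();
        printf("flash encryption: %s (mode %d)\n", enabled ? "on" : "off", (int)mode);
    }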
If the developers really wanted you to have the key, they could just print the per-device unique key on the box or on the PCB. What you're suggesting is more or less possible; the problem is you represent a very niche case. 99.9% of consumers don't want to reprogram a chip already soldered to a board, and it's not worth the time catering to them. Also, some IoT devices are left in physically insecure places, like on the exterior of the home; you'd never want someone to be able to extract the firmware key or re-flash those devices.
This is explicitly for IoT manufacturers that want to lock out people easily modifying a device (or bringing it back to life after it receives an "update of death") or the kinds of industrial customers that need to check a box for a cybersecurity audit.
These IoT manufacturers keep making all of these new products but the thing is, an ESP32 from several years ago is not that much different than one from today. They don't need much compute, anything difficult can take place on the cloud. So how do you sell someone new hardware if the first gen device is still perfectly capable? How do you sell a premium version if it's just the same parts inside? For the former, you can EoL a product by blocking it from cloud services (like Nest this week). If the firmware is locked, a hobbyist can't just flash modified gen 2 firmware and have the device functioning like normal. For the latter, you can lock the bootloader firmware so that it will only load the firmware that you want it to run (i.e. the basic or premium version).
When you say “this is explicitly for iot manufacturers…” are you referring to secure boot? That’s what I was referring to. I’ve done embedded development for about a decade, 6 at an IOT company, and our main motivation for using secure boot was to keep our firmware secure. The last thing we want is someone writing an article on the internet about how with this one easy trick you can break the security of the device and do whatever you want ( the devices are related to access control). If the company went out of business we’d have the option of publishing the signing key but it’d render all the devices vulnerable to malicious OTAs. Point is we’re not trying to lock folks out of tinkering, we’re trying to keep the devices secure. I understand as a side effect it means you can’t flash the device to whatever you want.
Also for what it’s worth these ESP chips are unbelievably cheap when bought at scale. The box the product comes in is probably more expensive
E-waste prevention is hella important. It's a tough situation though. I think for a board that goes out in the public, even at a prototyping level, it's important to know the chip you have is not tampered with. I once wanted to make a little people counter at a university campus with an ESP8266. I simply couldn't make sure it's resilient against some CS students poking at it.
Let's say the chip was lockable, what would prevent someone from using a bit of hot air and flux from just swapping out your chip with whatever?
It takes little skill: a 15 USD soldering iron to solder 4-5 through-hole wires, connect a 10 USD programmer, and flash new firmware. Investing in a (de)soldering station, with the risk of pulling off all the neighboring components, plus a breadboard or something to plug your new controller or memory into for programming and what not? I'm not sure I'd go through that trouble.
The firmware that's missing on that new chip.
That's why you force a full erase to clear the non-programmable bit?
Yes, but this I think is a bit sad. Once you have proper, hard-to-crack security, a smart CS kid won't find it, decide to hack it, and write a blog post about how they did it. As long as it is not something dangerous, sometimes having less security is better.
Sounds like you're trying to deprive CS students of their practical education :P
Yes, it's one thing to prevent others from reading the flash, but I don't see the value in preventing reflashing it.
It counters evil maid attacks.
Not really: if the evil maid is sophisticated enough to bring their own firmware to reflash your devices, they could also just swap the PCB containing the controller or solder in a new chip.
I chose the word "counters" rather than "eliminates" for the reason you outlined.
If you're the vendor, you can add a tamper-resistant or tamper-evident design to raise the cost of component-replacement attacks. Which can be countered by whole-device replacement, which in turn is countered by device identity attestation, and so on, in an endless arms race.
Tamper-evident stuff and device attestation solve all these problems even without preventing reflashing. If you can't check if the device has been tampered with or replaced, preventing flashing won't help. If you can, you don't need to.
Most IoT devices implement security and integrity using one time burnable registers (more importantly for keys).
It's sad but yes, those devices are permanently bound to the vendor.
There is no real alternative though, a TPM based approach makes it more complex and is another closed system.
You can. It's not that hard to physically put a new chip on. Software people are too afraid to get their hands dirty.
De-soldering an MCU and soldering on a new one is typically far from trivial. We're typically not talking about dedicated big through-hole flash chips here; we're often talking about MCUs with integrated flash memory which are surface-mounted, often with pins on the underside.
In general, I find it easier to desolder and replace surface mounted parts. With those, you just have to hit it with some hot air, it melts all the solder at once, and lifts right away. The chips are small, so it's not too bad to heat the whole thing evenly with a small air gun.
Through hole parts need a lot more heat across a bigger area, or you have to go pin by pin. I've scorched many a through hole board trying to desolder something, cursing at those who didn't socket the chip in the first place.
Want to annoy a repair person? Pot the whole thing in epoxy.
The saving grace here is that ESPs have radios and require certification, which is expensive. As a result it is common for vendors to use pre-certified modules that are provided on a modular PCB with castellated half-holes. These are relatively easy to desolder.
For ESP these modules are the WROOM line.
Usually in secure systems you can't read out all of the data from the old chip.
This was about reusing the chip. Since these things are built using finite resources, maybe we should also design them with that reality in mind?
Any alternatives that aren't consumer hostile?
Should we just be pushing harder for "Works with Home Assistant" certification?
I wonder if the power consumption is any better. An nRF runs circles around any existing ESP32 variants in terms of power.
Not to mention the horrific peak power draw. It took people a while to figure out that the things need a fair bit of close-by capacitance on the power rail or they crash.
How much capacitance? I built my own sensors based on ESP8266 and they've been flaky, and I wonder whether that's the issue.
Coming out of deep sleep and WiFi coming back up, I've seen upwards of 600 mA.
The annoying thing is this is only due to "calibration" which can, with some highly esoteric optimisations, be skipped. Depending on the application, this can realise massive gains in peak power, wakeup latency, and even average power. The whole process is hidden in a binary blob though, and Espressif will not elaborate on it, so it's very challenging to alter.
Wow, jeez, no wonder the USB won't power it...
That's 600 mA at 3.3 V. USB should be fine, but of course you need the capacitance to deal with that. Radio PAs can be power-hungry beasts.
220 µF would be the minimum. I've had modules get stuck in a boot loop when they have less than 120 µF (inferred by repairing switch modules with bad capacitors).
It depends entirely on your power supply and layout. As evidenced by the fact that there are literally millions of working ESP32s out there in the wild with far less than 220 µF capacitance on 3V3.
Usually you can fix it in software too.
For example, put a sleep(100us) as a hook before packet transmission to allow capacitors to recharge between packets.
Had to do this on a design powered by a cr2032 because the peak power draw from those batteries is really limited.
How many seconds of ESP32 power do you get from that cr2032?
A cr2032 has somewhere around 200 mAh. The low-power ESP32-C3 uses somewhere around 20 mA when its radios are off but the CPU and peripheral clocks are running, which gives a roughly 10 hour runtime. This calculation ignores the fact that the battery's voltage will drop below where the ESP browns out before it has delivered all its capacity (or assumes that it's regulated up at perfect efficiency), but most cr2032 batteries are a bit above 200 mAh, so it probably mostly evens out.
Though note that for most of its life, the cr2032 will deliver slightly below the ESP's minimum spec of 3V. From experience, that's not really an issue.
If you can get away with spending most of your time in the ESP's deep sleep state however, battery life is gonna be way better, and that's probably what you'd want to do if you're using a cr2032. In deep sleep, the ESP32-C3's data sheet says it consumes around 5µA (0.005mA). With 200mAh, this gives a battery life of 40 000 hours, or 4.5 years. At those time scales, battery self-discharge becomes relevant, so let's say a couple of years.
So a decent mental model is: you can do a few hours of cumulative compute over the course of a couple of years.
Unless you decide to boost the voltage of the cr2032 to be within the ESP's spec. In that case, the whole deep sleep discussion might be moot; I suspect the regulator's own power draw would absolutely dominate the 5µA of the ESP. But I'm not super familiar with the world of power regulators, maybe there are ultra low power regulators which can do the job with minimal losses even in the µA regime.
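To put the arithmetic above in one place, here's a tiny back-of-the-envelope calculator using the same rough numbers (200 mAh capacity, ~20 mA active, ~5 µA deep sleep); real batteries and boards will vary:

    #include <stdio.h>

    /* Rough numbers from the discussion above; real parts vary. */
    #define BATTERY_MAH   200.0    /* typical cr2032 capacity */
    #define ACTIVE_MA      20.0    /* ESP32-C3, CPU on, radios off (approx.) */
    #define SLEEP_MA        0.005  /* ~5 uA deep sleep, per the datasheet */

    int main(void)
    {
        double active_h = BATTERY_MAH / ACTIVE_MA;   /* ~10 hours */
        double sleep_h  = BATTERY_MAH / SLEEP_MA;    /* ~40,000 hours */
        printf("always-active runtime: %.0f h\n", active_h);
        printf("deep-sleep runtime:    %.0f h (~%.1f years, before self-discharge)\n",
               sleep_h, sleep_h / (24.0 * 365.0));
        return 0;
    }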
You might be able to do some cleverness with dc-dc boosting the voltage to charge a capacitor, then turning off the dc-dc converter until the capacitor is discharged. I haven't checked the numbers to see if that's workable though (capacitor leakage? does converter start-up use extra power?)
But your examples are all with the radio off. So then there is basically no point in using the ESP32. Better to use an MCU without radio and lower power consumption.
Well there's the use case of a device which gathers data and then uses its radio once every handful of hours to report that data to some service. If you're constantly using the radio you're gonna use way more power than my numbers of course.
I also don't think it's too unreasonable to use a C3 as an MCU in settings where a radio isn't required. The IC itself (i.e. not the WROOM module etc.) isn't that much more expensive than equivalently specced MCUs without a radio, and if you're already more familiar with developing for the ESP software ecosystem it might well be worth the few pennies extra. The ESP32-C3FH4X (the IC with 4 MB onboard flash, super easy to integrate into a design) only costs $1.60 per unit on LCSC for 100 units (and Mouser etc. is similar).
This is looking pretty great; I've really wanted an MCU with Zigbee on it for the various little battery-operated devices I've wanted to make. However, with Espressif's lineup, I've really lost track of what does what lately.
Digikey has the modules for under 4 EUR in unit quantities, but they aren't the friendliest to integrate, since they only have pads on the bottom.
I also found some boards with the bare chip for just over 4 EUR there, you can also find similar ones on AliExpress.
I'd like to know that too, have been considering doing some zigbee tinkering, and battery powered would be a requirement. I've read in some other comment that nRF would be much better in that regard. Need to do some googling for numbers...
All MCUs with 802.15.4 radios (STM32WB, nRF5x, ESP32, Ambiq, etc.) can transmit and receive Zigbee frames. The real issue with Zigbee is full software support.
Would an esp32 be the best soc for LoRa? I don't need WiFi or BT, which I know I can turn off to save power. Contemplating trying STM32 instead, don't have experience programming it yet.
No. All you would need is a SPI interface or whatever the Lora module speaks. The most basic microcontroller can do that. Any ESP32 is overkill for this.
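To illustrate how little the MCU side involves, here's a sketch of reading one register from an SX127x-class LoRa module over SPI. The cs_low/cs_high/spi_txrx helpers are hypothetical stand-ins for whatever SPI driver your MCU provides; the register details come from the SX127x datasheet:

    #include <stdint.h>

    /* Hypothetical HAL hooks: swap in your MCU vendor's SPI driver calls. */
    extern void    cs_low(void);
    extern void    cs_high(void);
    extern uint8_t spi_txrx(uint8_t out);   /* clock one byte out, one byte in */

    /* SX127x register read: send the address with the MSB clear ("read"),
     * then one dummy byte clocks the value back. RegVersion (0x42) reads as
     * 0x12 on SX1276-class parts, which makes a handy wiring sanity check. */
    static uint8_t lora_read_reg(uint8_t addr)
    {
        cs_low();
        spi_txrx(addr & 0x7F);               /* MSB = 0 -> read access */
        uint8_t value = spi_txrx(0x00);
        cs_high();
        return value;
    }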
Right. Sorry I meant development boards. This is low volume, probably will only make a few a year. The nice thing about the esp32 is I can get the board with lora/display built in, with a battery, for $25 each. The stm32 board and module is only a tiny bit less, for example. If I wanted to I could do this with a cheap 8bit microcontroller, I would just have to design a custom board with an oscillator to modulate the IR LEDs etc. I was gonna do that with PWM from the microcontroller to "simplify" things.
Edit: oh the MSP430 is neat! If I cared about battery life (driving 200ma of LEDs anyway...) I'd totally use that.
The Heltec boards are pretty popular, and also have some nice stuff like a screen and a LiPo charge controller integrated. They're really popular with the Meshtastic community.
>Espressif Systems (SSE: 688018.SH) announced ESP32-C5, the industry’s first RISC-V SoC that supports 2.4 GHz and 5 GHz dual-band Wi-Fi 6, along with Bluetooth 5 (LE) and IEEE 802.15.4 (Zigbee, Thread) connectivity. Today, we are glad to announce that ESP32-C5 is now in mass production.
It's a big plus if you want to write code for it in something like Rust. LLVM support for the architecture they used on their older chips (xtensa) for a very long time required compiling a fork of LLVM and rustc in order to target the chips. It may still, I didn't keep up with the effort to upstream that target. RISC-V is an open architecture that has a lot of people excited so compiler support for it is very good. Though as far as why Espressif is using it, it feels likely they would use it because it means they don't have to pay anyone any royalties for the ISA.
Better compiler support for RISC-V, but everything I've seen from them is a much shorter pipeline than the older Xtensa cores, so flash cache misses hit it harder.
Both RISC-V and Xtensa omit an ALU carry bit, which simplifies pipelining, but for these small cores it means 64-bit integer math usually takes a few more cycles than on a Cortex-M Arm chip.
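As a rough illustration, here's the 64-bit add spelled out in 32-bit halves, which is roughly what a compiler has to emit on a 32-bit core without a carry flag: the carry turns into an extra compare (an SLTU on RV32), where a Cortex-M does the whole thing with an ADDS/ADC pair.

    #include <stdint.h>
    #include <stdio.h>

    /* A 64-bit value held as two 32-bit halves, as on a 32-bit core. */
    typedef struct { uint32_t lo, hi; } u64_pair;

    static u64_pair add64(u64_pair a, u64_pair b)
    {
        u64_pair r;
        r.lo = a.lo + b.lo;
        uint32_t carry = (r.lo < a.lo);   /* carry-out of the low word, recomputed by comparison */
        r.hi = a.hi + b.hi + carry;
        return r;
    }

    int main(void)
    {
        u64_pair a = { 0xFFFFFFFFu, 0 }, b = { 1, 0 };
        u64_pair s = add64(a, b);         /* expect hi = 1, lo = 0 */
        printf("hi=0x%08x lo=0x%08x\n", (unsigned)s.hi, (unsigned)s.lo);
        return 0;
    }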
But that also depends on what you use it for. If you're after the wifi and IO and other nice things for a mostly idle device - the pipeline is almost irrelevant. Esphome can run on older versions just fine too. On the other hand if you're doing something very optimised and need tight timing around interrupts to drive external hardware - it may matter a lot.
I also found splitting interrupts between the two cores helps with latency, but even if one core has only a single interrupt, that interrupt latency is increased compared to a single core system with a single interrupt. I suspect this is at least partly because they only put a single fetch pipe between the instruction cache and the crossbar.
There's definitely a trade-off: things that seem relatively simple at the ISA level can really complicate the pipeline.
Xtensa pays for it with crippled 64-bit performance, which has a lot of downstream impacts. Ex: division by a constant is also slower. Most compilers don't even bother fast pathing 64-bit division by a constant.
I was surprised to find Apple kept ADC/ADCS in aarch64. Maybe this ends up being one of those things that's less useful or potentially a bottleneck depending on the specific implementation. Edit: backwards compatibility probably.
The fact that a few cores have bolted it on to RISC-V makes me think I must not be alone in missing it.
Any chance you could explain this to somebody who is just learning about HID and has run this example: https://github.com/espressif/esp-idf/tree/master/examples/pe... ? "non-boot protocol" I'm guessing is the key here? I don't have a super deep understanding of HID or what the "boot-protocol" refers to.
The USB HID protocol is designed to support basically any device that regularly reports a set of values; those values can represent which keys are pressed, how a mouse has moved, how a joystick is positioned, etc. Now, different devices have different things that they support: joysticks have varying numbers of axes, mice have different sets of buttons, some keyboards have dials on them, etc. So, there's no single format for a report that simultaneously efficiently uses bandwidth and supports all the things a human interface device might do. To solve this, the HID protocol specifies that the host can request a "report descriptor" that specifies the format and meaning of the status reports. This is great for complex devices running a full OS; there's plenty of memory and processing power to handle those varying formats. However, these HID devices needed to also work in very limited environments: a real mode BIOS, microcontroller, etc. So, for certain classes of device such as keyboards and mice, there is a standard but limited report format called the "boot protocol". IIRC, the keyboard version has space to list 6 keys that are pressed simultaneously (plus modifiers), all of which must be from the same table of keys in the spec, and the mouse has a dX and dY field plus a bitfield for up to 8 buttons (four of which are the various ways you can scroll). To implement a more complex device, you'd want to be able to specify your own report format, which the ESP driver doesn't seem to allow you to do.
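For reference, the boot-protocol reports mentioned above are tiny fixed structs; the layout below follows the USB HID spec, and the sample bytes are just illustrative:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Boot-protocol keyboard report: 8 bytes with a fixed layout. */
    typedef struct {
        uint8_t modifiers;   /* bit 0 = LeftCtrl, bit 1 = LeftShift, bit 2 = LeftAlt, ... */
        uint8_t reserved;
        uint8_t keys[6];     /* up to 6 concurrently pressed keys; usage ID 0 = none */
    } boot_kbd_report_t;

    /* Boot-protocol mouse report: a button bitmap plus relative X/Y movement.
     * Anything richer (wheels, dials, extra axes) needs the full
     * report-descriptor machinery described above. */
    typedef struct {
        uint8_t buttons;     /* bit 0 = left, bit 1 = right, bit 2 = middle */
        int8_t  dx, dy;
    } boot_mouse_report_t;

    int main(void)
    {
        /* Example 8-byte report: Left Shift held, usage 0x04 ('a') pressed. */
        const uint8_t raw[8] = { 0x02, 0x00, 0x04, 0, 0, 0, 0, 0 };
        boot_kbd_report_t r;
        memcpy(&r, raw, sizeof r);    /* all-byte members, so no padding concerns */
        printf("modifiers=0x%02x, first key usage=0x%02x\n", r.modifiers, r.keys[0]);
        return 0;
    }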
So your original comment / request was regarding USB specifically then?
I ask because I'd have guessed (possibly incorrectly!) that implement HID via GATT (BLE) you'd be able to support anything the BLE hardware revision could implement?
Perhaps the disconnect for me is that it's unclear when there is some special hardware that exists within the ESP32 itself (I think I2C, I2S, etc would be examples of this) vs something you are just implementing by manipulating the IO pins. Perhaps HID is one of those things?
Maybe. If you have Linux the command `lsusb -v` gives you a verbose breakdown of the attached USB devices, if you find your keyboard it will show which interfaces it provides (a USB device can offer several if it wants) and to work at boot you want:
interface class 3 (a Human Interface Device) sub class 1 (Boot protocol or Boot interface)
In contrast the sub class 0 of HID is just the ordinary case, which is arbitrarily complicated (six thousand keys and four axis input? Why not) and so understandably a BIOS or similar environment might not implement that but a full blown OS usually does.
Doubtless tools exist for other platforms which can show the same information as lsusb
I'll give you my anecdote. I'm building a device that reads the input of a USB game controller. In my case, it's a Sim Steering Wheel. I ended up needing to incorporate a MAX3421e USB Host chip to read the HID input, because the ESP firmware doesn't have this implemented. Hardware wise, all ESP32 chips with hardware USB could do this, but they haven't prioritized it in software. Some keyboards and Mice use a protocol called "boot protocol", and you can get those to work. It's not very common in game controllers though.
Any guesses as to when a hobbyist might be able to buy the module without the dev board? Their aliexpress store didn't have them as far as I can tell, I assume they are prioritizing dev boards for the moment unless you're a big enough company to actually talk directly with Espressif.
Thanks for the link, but yeah as the other poster mentioned, this is for a dev board. I'd be interested in buying the module (which is somewhere between the bare IC and the dev board). Here is an example from their store of an ESP32-S3 module: https://www.aliexpress.com/item/1005006334720108.html?pdp_np...
Professionally I use the ESP8266 a bunch because it's still the cheapest, but the lack of 5 GHz is really starting to bite as customers complain it 'doesn't work'.
Honestly, shipping stuff direct from China and paying more for it here always trips me out, but it's kinda the norm now. You think it'll get easier to actually repair or reprogram stuff or are we just stuck tossing hardware once it's locked down?
Too bad after tariffs, this little guy would have cost $10-15 and now will probably be $50 +
I feel like the Trump admin is going to have to make a carve-out for the ESP32 or certain Espressif products. So many IoT businesses are going to go out of business if these MCUs balloon in price.
Alright, I figured that would have to be the case. Do you know any resources that break down what is exempt and what is not, without having to look up the individual HTS codes?
If it's like the other ESP32s with PSRAM support, then 2-8 MB most likely. IIRC it is addressed in the same way as the NAND, so the more RAM, the less NAND you can have.
Maybe not applicable for this new one, but that's my understanding for the S3/C5 models (something like 16 MB NAND and 8 MB PSRAM).
I was able to find a preliminary datasheet on Google, it looks like it has 2x CAN (called TWAI in the datasheet). I can't find info on whether it has an FPU or not
Why do I have this sickening feeling that in a few years anyone doing anything with hardware is going to be ordering everything direct from China, like we're some kind of undeveloped client state?
You are already living in this reality. And it has already been happening for quite a few years by now.
People that are especially vocal and badly hit by tariffs at the moment are the people who have already been doing just that.
This transition happened so quickly that most people haven't fully caught up to the implications.
In my mind, China is already the center of gravity.
We've still got local US distributors though, regardless of everything being made in China. Like if you decide you need something tomorrow, you can go on Amazon and get most things pretty quick (despite overpaying 1.5-2x compared to Aliexpress). And there's a whole cottage industry of 3d printing shops selling canned solutions to people who don't want to hunt Aliexpress themselves.
It's been well over a decade since I was doing embedded design professionally, so my perspective is coming more from a hobby/3d printing/"maker" place. But it feels like one of the main results of these tariffs is that the bottom is going to drop out on Chinese and Chinese-adjacent sellers preloading so much stuff into US warehouses ahead of sale, and instead just shipping orders direct from China. Using a US warehouse means the seller has to front the money for the tariffs as well and takes a risk of them being lowered depending on Krasnov's whims. Whereas shipping direct from China, even if the seller is handling the tariffs (eg Aliexpress Choice), they've already got the cash in hand from a confirmed purchase.
I was just being facetious about the general pain we're feeling from the US's new tariff-based national sales tax. And I haven't been following any reciprocal actions for what is now expensive to get into China. Are -C5's only made in Taiwan or something?
Ah, well that's handy to know. For this week, at least. I've just been watching the 3d printer parts I just squeaked in under the de minimis wire triple in price due to the new taxes.
They link to a $15 developer board on Aliexpress (much the same as the rest of the ESP developer boards floating around for years) which is now inflated to $35 with tax, shipping, and tariff.
My impulse purchase has been tempered with "eh, do I really need it?"
These are ones actually made by Espressif and limit is one per person (presumably supply issues as they ramp up mass production), certainly there will be dozens of clones soon.
Fortunately it’s only £16.40 with VAT and shipping to the UK. Approx $21.85. Comparable to the £9 M5Stack AtomS3 Lite (ESP32-S3) I picked up from Pi Hut recently.
The really criminal thing is it only costs $8.43 to mail that thing from China to your house in the USA... it likely would cost you more to mail that same item to yourself from yourself.
That alone puts US-based sellers at a mega disadvantage compared to cheap Chinese goods - and it's not a good thing.
Most of my Aliexpress electronics orders are shipped to a local US Aliexpress distribution center which then mails them locally. These come in small padded envelopes which are not expensive to ship.
>The really criminal thing is it only costs $8.43 to mail that thing from China to your house in the USA... it likely would cost you more to mail that same item to yourself from yourself.
These things are tiny and very cheap to ship. I could probably pack 40 of them into a USPS flat rate box shipped anywhere in the US for $9.30.
This was a decent argument when you could get things shipped from China for $0.50, but not now or in this case.
Yeah if I were mailing a single one of these to myself I bet I could get away dropping it in a regular envelope with maybe an extra stamp. (Assuming they come without the headers soldered on like most of the clones I’ve got from AliExpress, RIP).
No? Performance is implementation specific, they’re usually cheaper than ARM since there’s no core ISA license overhead, and while the core instruction set being extremely limited does cause a little bit of tension in compiler land, most cores with a baseline set of extensions get reasonable code generation these days.
One of the main reasons RISC-V is gaining popularity is that companies can implement their own cores (or buy cheaper IP cores than from ARM) and take advantage of existing optimizing compilers. Espressif are actually a perfect example; the core they used before (Xtensa) was esoteric and poorly supported and switching to RISC-V gives them better toolchain support right out of the gate.
You are really only correct in your last point as the advantage of RISC-V is to the company implementing their own core, not to the end user.
The reason is that CPU cores only form a tiny part of the SOC, the rest of the SOC is proprietary and likely to be documented to whatever level the company needs and the rest if available hidden under layers of NDA's. Just because the ISA is open source does not mean you know anything about the rest of the chip.
That said, the C5 is a nice SoC, and it is nice that we have some competition to ARM.
If anyone from Espressif is seeing this: I love your MCUs. But can you please improve the ESP-IDF so that it's usable on BSD systems? The Linuxisms baked into its build system are unnecessary.
I think moving from Make in the old version of IDF to CMake was a mistake.
Love it or hate it, CMake is more or less the de facto build system for C/C++
And just like any build system for any language/stack, there is a small group of hardcore "enthusiasts" who create and push their one true build tech to rule them all, and then there is the large majority of people who have to deal with it and just want to build the damn thing.
I didn't mean to sound like a hard-core BSD enthusiast, sorry. I was just very frustrated when they moved to CMake in their newer IDF: they added useless things that excluded the BSDs. In its current state, it's untenable to patch it to make it work. This wasn't caused by the use of CMake; they could've moved to CMake but done so with other OSes in mind, especially since they're already in our neighbourhood.
It should generally be easier to make a CMake buildsystem work well on the BSDs than hand-cobbled Makefiles, in terms of opportunities to introduce Linuxisms.
I wasn't really clear there. The Linux-specific stuff wasn't caused by CMake. They are two independent things that happened as part of the same upgrade.
Considering that they are supporting Linux, there was no real reason to make it so Linux-specific that all other Unix-like systems got excluded.
Announced 2+ years ago (almost 3, now that I look: https://www.espressif.com/en/news/ESP32-C5 ) and sampling 1+ year ago, good to see it finally come. 5GHz support is increasingly important.
Why is 5GHz increasingly important? For most IoT applications, isn't the better wall penetration of 2.4GHz more important than the increased peak speeds of 5GHz?
Some places are wanting to go dual 5 GHz radios on APs (for general client density) rather than 2.4 GHz and 5GHz radios but 2.4 GHz only IoT devices force you into keeping a 2.4 GHz radio infrastructure active. These kinds of environments tend to turn the power down on 2.4 GHz anyways as "goes through walls" can actually be a bad thing for coverage when multiple APs are at play (SNR is more important than raw power).
For a typical consumer home use case continuing to use 2.4 GHz is most likely ideal though. Though some apartment complexes have such bad 2.4 GHz interference even that might not be universal.
YMMV but in my house, with a commercial-grade wifi AP, I found that devices on 5 GHz get much better speed and range due to all the local noise on 2.4 GHz.
Yes and no, depending on the environment. In an apartment building, the wall penetration is a liability, as the 2.4ghz spectrum, with only three channels, gets extremely congested. Going 5ghz helps immensely, with more channels available and less penetration, so you get more spectrum reuse.
It's not the peak speeds, it's the spectrum use. The 2.4GHz ISM band has 100 MHz of available spectrum, the 5GHz wifi spectrum is 740 MHz wide, and is still occupied by fewer deployed devices.
To an IOT application, it's the difference between chatting with a friend at a quiet outdoor cafe and trying to shout at her in a crowded bar.
In my opinion 2.4GHz is rapidly becoming a non-starter. Companies like Eero, TP-Link, and Spectrum are using 40Mhz wide swaths of 2.4GHz for their Mesh backhauls. Sitting in my home office in a single family detached home, I can see 24 different SSIDs all running 40mhz on the 2.4GHz band with 7 having a signal strength greater than -80dBm.
5Ghz doesn't propagate very far and putting IoT devices inside your home on 5Ghz makes a lot of sense. With 6Ghz coming on line and being reserved for high bandwidth applications, 5Ghz for IoT makes even more sense.
Pop open a WiFi scanner sometime. Unless you're living way out in the country, the 2.4GHz spectrum will be pretty much full. Everyone has a 2.4GHz router and you're likely to get a lot of interference.
Really it's the same reason computers moved to 5GHz, and now 6GHz.
The big anti-feature is that developers can block users from flashing the chips.
Yes, there's a security angle, but if I have the chip in my hands, I should be able to flip some pin to reprogram the chip and prevent all the e-waste.
>> The big anti-feature is that developers can block users from flashing the chips.
There's a liability angle too. If a company (or person) makes a product that has any potential for harm and you reprogram it prior to an accident, YOU must take responsibility but will probably not.
Another angle is that the hardware may be cloneable and there's no reason anyone should be able to read out the code and put it into a clone device. There is a valid use case in making a replacement chip for yourself.
Companies will buy far more chips than hobbyists, so this feature caters to them and for valid reasons.
>> Yes, there's a security angle, but if I have the chip in my hands, I should be able to flip some pin to reprogram the chip and prevent all the e-waste.
What if the chip used masked ROM? Your desire is not always feasible. You can always replace the chip with another one - and go write your own software for it </sarcasm>.
BTW I'm a big fan of Free Software and the GPL, but there are places where non-free makes sense too.
> there are places where non-free makes sense too
Seriously now, where is that? The only scenarios I can think of are devices that could put others at risk. Large vehicles. But even, many countries allow modified vehicles on the road.
But everything else should be game. If it's my device and only me at risk, why should anyone else get a say.
At least the European countries I am aware of, the owners will have a hard time on a police control if the modifications aren't part of the allowed ones by law, and depending on the modification, it is missing from the car documentation.
A major worry I have is: the EU is bringing forth some serious cybersecurity regulations (affecting equipment with radios (WiFi, Bluetooth, ...) as part of the Radio Equipment Directive later this year, soon to affect everything as part of the Cyber Resiliency Act). This enforces some good security practice, but also has a lot of stuff in it that's way easier to comply with if you just say, "the device is locked down with hardware-protected write protection (or Secure Boot)".
To my understanding, there's nothing specifically preventing companies from giving the user the ability to disable write protection or load their own signing keys, but it means that the default will be to have locked-down devices and companies will have to invest extra resources and take extra risks with regard to certification into enabling users to do what they want with the hardware. I predict that the vast majority of companies making random IoT crap won't bother, so it's e-waste.
I am afraid this is a very narrow reading of the CRA. Did you read the act yourself or some qualified opinion by a European lawyer? Security updates are the default demand of CRA and not having them is an exception that requires an assessment of risk (which I would assume mean that it's only viable for devices not directly connected to Internet).
An (equally narrow ;)) quote:
"ensure that vulnerabilities can be addressed through security updates, including, where applicable, through automatic security updates that are installed within an appropriate timeframe enabled as a default setting, with a clear and easy-to-use opt-out mechanism, through the notification of available updates to users, and the option to temporarily postpone them;"
Thus, I expect RED to stipulate only radio firmware to be locked down to prevent you from unlocking any frequencies but the CRA to require all other software to be updatable to patch vulns.
I have not read the RED or the CRA, nor discussed what they specifically say with a lawyer who has read them. However, I have gone through a recent product R&D process in Europe where the product has WiFi and LTE connectivity, so it falls under the RED (even though WiFi and 4G are handled by off-the-shelf modules). I have read parts of the EN-18031 standards (mostly using their decision trees and descriptions of decision nodes as reference), I've been on a seminar with a Notified Body about what the practical implications of the RED are, I've filled out a huge document going through all the decision trees in 18031 and giving justifications for the specific path through the decision tree applies to our product. I've also discussed the implications of the RED and 18031 with consultants.
I don't doubt you with regard to what the RED and the CRA actually says. However I'm afraid that my understanding of it better reflects the practical real-world implications of companies who just need to go through the certification process.
18031 requires an update mechanism for most products, yes, however it some very stringent requirements for it to be considered a Secure Update Mechanism. I sadly don't have the 18031 standard anymore so I can't look up the specific decision nodes, but I know for sure that allowing anyone with physical access to just flash the product with new unsigned firmware would not count as a Secure Update Mechanism (I think unless you can justify that the operational environment of the product ensures that no unauthorized person has physical access to the device, or something like that).
EDIT: And I wanted to add, in one common use case for microcontrollers, namely as one part of a larger product with some SoC running Linux being the main application processor and with MCUs handling specific tasks, you can easily get a PASS in all the EN-18031 decision trees without an upgrade mechanism for the MCUs themselves. In such products, I can imagine a company deciding that it's easier to just permanently lock down the MCU with a write protect than to justify leaving it writeable.
Thank you, an interesting (and somewhat sad) perspective. Would be unfortunate if these two regulations combined result in less firmware update capabilities not more.
Yeah, it's sad. I can say with certainty that there are products whose developers would have decided to leave MCUs and/or SoMs writeable based on analysing the threat model, but where the rigid decision trees in EN-18031 around secure storage mechanisms and secure update mechanisms makes that too difficult to justify.
How much chip re-flashing / re-use is being done at the moment? I'm not convinced e-waste is repurposed in any kind at any scale... although it's an interesting premise if electronics are more modular and can easily be disassembled, and e.g. millions of esp32 based chips end up on the secondhand market.
Quite a bit in IoT market, especially in the low-mid tier and Chinese imports. There’s an entire ecosystem of custom OS’s for home automation that runs on these ESPs, ESPHome for example. I have flashed quite a few smart sockets to run Matt/kafka messaging client rather than unknown vendors software that has an open socket to offshore ips.
From what I've seen very little. I _think_ it's something to do with the kind of people who work on embedded systems generally wanting the freedom that comes with making things from scratch resulting in not that much interest in repurposing old things outside of making them work with their overall IoT network.
I recently reverse engineered an e-waste STEM toy from scratch ( https://github.com/padraigfl/awesome-arcade-coder ) and the general response I got from places were:
a. to salvage the microcontroller and other relevant parts (probably worth $4 off a board that would cost $100+ to replicate)
b. a weirdly hostile attitude about the ethics of reverse engineering regardless of the motives (guessing people have been burned a lot with people stealing their designs)
I've mostly worked on the frontend and don't have much knowledge of embedded systems at all but it wasn't anywhere near as hard as I expected. Keen to find some other ESP32 devices to tweak (suggestions welcome!). I guess even if making them unflashable becomes the norm it won't be too hard to just swap the ESP32 off the board with a new one.
The ESP32 (and other ESP chips) are somewhat common in smart home/IoT gear, particularly for devices that are dependent on a cloud service to function. There's a growing trend in the smart home community of re-flashing cloud-dependent ESP32 based hardware with ESPHome, which makes the device fully local controlled, eliminating the risk of the cloud service being discontinued/enshitified.
It's not a particularly common thing yet, but smart home enthusiasts are becoming increasingly concerned about the expense and effort required to replace cloud-dependent hardware because the manufacturer decided the cloud service isn't worth maintaining anymore.
How is this different in practice to the regular ESP32's secure boot, where you can technically flash the chip with whatever you like but unless you have the signing key the bootloader will refuse to load it?
If the key is stolen, you can still protect your hardware.
You can generate the keys on-device during the initial provisioning and have it encrypt the flash with that key, so every device generates its own unique key and there isn't any practical way to extract it; even the developer can't flash it directly, and OTAs are required to update the firmware. This effectively means nobody can flash the chip anyway since you can't know the keys. Is there some sort of attack vector here I'm missing that gets mitigated by preventing flashing entirely?
If the developers really wanted to you have the key, they could just write the per device unique key in the box or on the pcb. What you’re suggesting is more or less possible, the problem is you represent a very niche case. 99.9% of consumers don’t want to reprogram a chip already soldered to a board and it’s not worth the time catering to them. Also some IOT devices are left in physically insecure places like on the exterior of the home, you’d never want some to be able to extract the firmware key or re-flash those devices.
This is explicitly for IoT manufacturers that want to lock out people easily modifying a device (or bringing it back to life after it receives an "update of death") or the kinds of industrial customers that need to check a box for a cybersecurity audit.
These IoT manufacturers keep making all of these new products but the thing is, an ESP32 from several years ago is not that much different than one from today. They don't need much compute, anything difficult can take place on the cloud. So how do you sell someone new hardware if the first gen device is still perfectly capable? How do you sell a premium version if it's just the same parts inside? For the former, you can EoL a product by blocking it from cloud services (like Nest this week). If the firmware is locked, a hobbyist can't just flash modified gen 2 firmware and have the device functioning like normal. For the latter, you can lock the bootloader firmware so that it will only load the firmware that you want it to run (i.e. the basic or premium version).
When you say “this is explicitly for iot manufacturers…” are you referring to secure boot? That’s what I was referring to. I’ve done embedded development for about a decade, 6 at an IOT company, and our main motivation for using secure boot was to keep our firmware secure. The last thing we want is someone writing an article on the internet about how with this one easy trick you can break the security of the device and do whatever you want ( the devices are related to access control). If the company went out of business we’d have the option of publishing the signing key but it’d render all the devices vulnerable to malicious OTAs. Point is we’re not trying to lock folks out of tinkering, we’re trying to keep the devices secure. I understand as a side effect it means you can’t flash the device to whatever you want.
Also for what it’s worth these ESP chips are unbelievably cheap when bought at scale. The box the product comes in is probably more expensive
E-waste prevention is hella important. It's a tough situation though. I think for a board that goes out in the public, even at a prototyping level, it's important to know the chip you have is not tampered with. I once wanted to make a little people counter at a university campus with an ESP8266. I simply couldn't make sure it's resilient against some CS students poking at it.
Let's say the chip was lockable, what would prevent someone from using a bit of hot air and flux from just swapping out your chip with whatever?
It takes little skill, 15USD soldering iron to solder 4-5 through holes wires, connect 10 usd programmer and flash a new firmware. Investing in a (de)soldering station, with the risk of pulling all the neighboring components, breadboard/something to plugin your new controller or memory into for programming and what not? I’m not sure if I’d go through that trouble
The firmware that's missing on that new chip
That's why you force a full erase to clear the non programmable bit?
Yes, but this I think is a bit sad. Once you have proper hard to crack security a smart kid from CS won't find it hidden decide to hack it and write blog post about it how he has done it. As long as it is not something dangerous sometimes having less security is better
Sounds like you're trying to deprive CS students of their practical education :P
Yes, it's one thing to prevent others from reading the flash, but I don't see the value in preventing reflashing it.
It counters evil maid attacks.
Not really, if the evil maid is sophisticated enough to bring their own firmware to reflash your devices, they could also just swap the PCB containing the controller or solder in a new chip
I chose the word counters rather than eliminate for the reason you outlined.
If you're the vendor, you can add a tamper-resistant or tamper-evident design to raise the cost of ,component-replacement attacks. Which can be countered by whole-device replacement, which in turn is countered by device identity attestation, amd so on, in an endless arms-race.
Tamper-evident stuff and device attestation solve all these problems even without preventing reflashing. If you can't check if the device has been tampered with or replaced, preventing flashing won't help. If you can, you don't need to.
Most IoT devices implement security and integrity using one time burnable registers (more importantly for keys). It's sad but yes, those devices are permanently bound to the vendor. There is no real alternative though, a TPM based approach makes it more complex and is another closed system.
You can. It's not that hard to physically put a new chip on. Software people are too afraid to get their hands dirty.
De-soldering an MCU and soldering on a new oneis typically far from trivial. We're typically not talking about dedicated big through-hole flash chips here. We're often talking about MCUs with integrated flash memory which are surface-mounted and often with pins on the underside.
In general, I find it easier to desolder and replace surface mounted parts. With those, you just have to hit it with some hot air, it melts all the solder at once, and lifts right away. The chips are small, so it's not too bad to heat the whole thing evenly with a small air gun.
Through hole parts need a lot more heat across a bigger area, or you have to go pin by pin. I've scorched many a through hole board trying to desolder something, cursing at those who didn't socket the chip in the first place.
Want to annoy a repair person? Pot the whole thing in epoxy.
The saving grace here is that ESPs have radios and require certification which is expensive. As a result it is common for vendors to use pre-certified modules that are provided on a modular PCB with castellated half-holes. These are relatively easy to desolder.
For ESP these modules are the WROOM line.
Usually in secure systems you can't read out all of the data from the old chip.
This was about reusing the chip. Since these things are built using finite resources, maybe we should also design them with that reality in mind?
Any alternatives that aren't consumer hostile?
Should we just be pushing harder for "Works with Homeassistant" certification?
I wonder if the power consumption is any better. An nRF runs circles around any existing ESP32 variants in terms of power.
Not to mention the horrific peak power draw. It took people a while to figure out that the things need a fair bit of close-by capacitance on the power rail or they crash.
How much capacitance? I built my own sensors based on ESP8266 and they've been flaky, and I wonder whether that's the issue.
Coming out of deep sleep and Wifi coming back up, I’ve seen upwards of 600mA
The annoying thing is this is only due to "calibration" which can, with some highly esoteric optimisations, be skipped. Depending on the application, this can realise massive gains in peak power, wakeup latency, and even average power. The whole process is hidden in a binary blob though, and Espressif will not elaborate on it, so it's very challenging to alter.
Wow, jeez, no wonder the USB won't power it...
That's 600 mA at 3.3V. USB should be fine, but of course you need the capacitance to deal with that. Radio PAs can be power-hungry beasts.
220uF would be the minimum. I've had modules get stuck in a boot loop when they have less than 120uF (inferred by repairing switch modules with bad capacitors).
It depends entirely on your power supply and layout. As evidenced by the fact that there are literally millions of working ESP32s out there in the wild with far less than 220uF capacitance on 3V3.
Usually you can fix it in software too.
For example, put a sleep(100us) as a hook before packet transmission to allow capacitors to recharge between packets.
Had to do this on a design powered by a cr2032 because the peak power draw from those batteries is really limited.
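A minimal sketch of that delay-before-transmit idea, assuming you control the send path in your own application code; send_sensor_report() and my_radio_send() are illustrative names rather than ESP-IDF APIs, and esp_rom_delay_us() is the IDF busy-wait helper if memory serves:

    // Sketch only: pause briefly before each transmit so the bulk capacitor on
    // the 3V3 rail can recover from the previous radio burst.
    #include <stdint.h>
    #include <stddef.h>
    #include "esp_err.h"
    #include "esp_rom_sys.h"   // esp_rom_delay_us()

    esp_err_t my_radio_send(const uint8_t *buf, size_t len);  // placeholder for the real send call

    esp_err_t send_sensor_report(const uint8_t *buf, size_t len)
    {
        esp_rom_delay_us(100);           // ~100 us recharge pause (tune per board/supply)
        return my_radio_send(buf, len);
    }

The right delay (and capacitance) is board- and supply-specific, so treat the 100 us as a starting point to tune rather than a recommendation.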
How many seconds of ESP32 power do you get from that cr2032?
A cr2032 has somewhere around 200mAh. The low-power ESP32-C3 uses somewhere around 20mA when its radios are off but the CPU is running and peripheral clocks are running, which gives a roughly 10 hour runtime. This calculation ignores the fact that the battery's voltage will drop below where the ESP browns out before it has delivered all its power (or assumes that it's regulated up at perfect efficiency), but most cr2032 batteries are a bit above 200mAh so it probably mostly evens out.
Though note that for most of its life, the cr2032 will deliver slightly below the ESP's minimum spec of 3V. From experience, that's not really an issue.
If you can get away with spending most of your time in the ESP's deep sleep state however, battery life is gonna be way better, and that's probably what you'd want to do if you're using a cr2032. In deep sleep, the ESP32-C3's data sheet says it consumes around 5µA (0.005mA). With 200mAh, this gives a battery life of 40 000 hours, or 4.5 years. At those time scales, battery self-discharge becomes relevant, so let's say a couple of years.
So a decent mental model is: you can do a few hours of cumulative compute over the course of a couple of years.
Unless you decide to boost the voltage of the cr2032 to be within the ESP's spec. In that case, the whole deep sleep discussion might be moot; I suspect the regulator's own power draw would absolutely dominate the 5µA of the ESP. But I'm not super familiar with the world of power regulators, maybe there are ultra low power regulators which can do the job with minimal losses even in the µA regime.
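Putting the runtime estimate above into a quick back-of-the-envelope program, under the same assumptions (200mAh, 20mA active with radio off, 5µA deep sleep, no brown-out or regulator losses):

    #include <stdio.h>

    int main(void)
    {
        const double capacity_mah  = 200.0;   // typical CR2032
        const double active_ma     = 20.0;    // CPU + peripherals on, radio off
        const double deep_sleep_ma = 0.005;   // ~5 uA per the C3 datasheet

        printf("active:     %.0f h\n", capacity_mah / active_ma);          // ~10 h
        printf("deep sleep: %.0f h (~%.1f years)\n",
               capacity_mah / deep_sleep_ma,
               capacity_mah / deep_sleep_ma / (24.0 * 365.0));             // ~40000 h, ~4.6 y
        return 0;
    }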
You might be able to do some cleverness with dc-dc boosting the voltage to charge a capacitor, then turning off the dc-dc converter until the capacitor is discharged. I haven't checked the numbers to see if that's workable though (capacitor leakage? does converter start-up use extra power?)
But your examples are all with the radio off. So then there is basically no point in using the ESP32. Better to use an MCU without radio and lower power consumption.
Well there's the use case of a device which gathers data and then uses its radio once every handful of hours to report that data to some service. If you're constantly using the radio you're gonna use way more power than my numbers of course.
I also don't think it's too unreasonable to use a C3 as an MCU in settings where a radio isn't required. The IC itself (aka not the WROOM module etc) isn't that much more expensive than equivalently specced MCUs without a radio, and if you're already more familiar with developing for the ESP software ecosystem it might well be worth the few pennies extra. The ESP32-C3FH4X (so the IC with 4MB onboard flash, super easy to integrate into a design) only costs $1.60 per unit on LCSC for 100 units (and Mouser etc. is similar).
This is looking pretty great, I've really wanted a MCU with Zigbee on it, for the various little battery-operated devices I've wanted to make. However, with Espressif's lineup, I've really lost track of what does what, lately.
Does anyone know of a good comparison resource?
The flashy PDF is here https://products.espressif.com/static/Espressif%20SoC%20Prod... a one-pager comparing all models.
Are we sure this is correct? The table shows the ESP32-C5 supports CAN-FD, but I can't find any info on a CAN-FD peripheral, drivers, etc.
According to that pdf the ESP32-C5 does have Zigbee.
https://products.espressif.com/#/product-comparison
This is a little bit more interactive and detail-oriented. I think they also have flashy onesheet PDFs that are more marketing oriented.
The ESP32-C6 has a Zigbee radio. I have 6 myself -- they're great.
I bought a few of those, but at $8 they're a bit pricier than the $3 Espressif spoiled me with.
Supermini boards with the ESP32-C6 on them can be had for approximately 4 euros each.
Do you have any links? I couldn't find any.
Digikey has the modules for under 4 EUR in unit quantities, but they aren't the friendliest to integrate, since they only have pads on the bottom. I also found some boards with the bare chip for just over 4 EUR there, you can also find similar ones on AliExpress.
Nice, thanks! I didn't find them a few days ago when I looked, and I spend $40 for 5, when I could have gotten 10. Thanks again.
Does the Zigbee work well/as expected? Does it have lower power draw when doing Zigbee vs. wifi?
I'd like to know that too, have been considering doing some zigbee tinkering, and battery powered would be a requirement. I've read in some other comment that nRF would be much better in that regard. Need to do some googling for numbers...
All MCUs with 802.15.4 radios (STM32WB, nRF5x, ESP32, Ambiq, etc.) can transmit and receive Zigbee frames. The real issue with Zigbee is full software support.
I wanted a board with Zigbee support but $21 ($16 plus $5 shipping) is quite expensive for a single board.
Would an esp32 be the best soc for LoRa? I don't need WiFi or BT, which I know I can turn off to save power. Contemplating trying STM32 instead, don't have experience programming it yet.
No. All you would need is a SPI interface or whatever the Lora module speaks. The most basic microcontroller can do that. Any ESP32 is overkill for this.
Right. Sorry, I meant development boards. This is low volume; I'll probably only make a few a year. The nice thing about the ESP32 is I can get a board with LoRa and a display built in, with a battery, for $25 each. The STM32 board plus module is only a tiny bit less, for example. If I wanted to, I could do this with a cheap 8-bit microcontroller; I would just have to design a custom board with an oscillator to modulate the IR LEDs etc. I was gonna do that with PWM from the microcontroller to "simplify" things.
Edit: oh the MSP430 is neat! If I cared about battery life (driving 200ma of LEDs anyway...) I'd totally use that.
The XIAO RP2040 looks perfect for what I'm doing, actually, and draws less power.
https://www.seeedstudio.com/LoRa-E5-Wireless-Module-Tape-Ree...
Use something based on the STM32WLE5JC or a discrete SX1262.
The Heltec boards are pretty popular; they also have some nice stuff like a screen and a LiPo charge controller integrated. They're especially popular with the Meshtastic community.
Hopefully p4 will be released soon too!
I just ordered a P4 dev kit from Amazon yesterday, ETA is Wednesday! https://www.amazon.com/dp/B0F63FQB8D?ref=ppx_yo2ov_dt_b_fed_...
did they ever announce the price of p4?
This microcontroller, like all microcontrollers Espressif released in the last few years, uses RISC-V as the ISA.
I believe the C series is RISC-V, not the S series.
Xtensa is a dead end; they said so on ESP32.com when someone pointed out the FPU ABI bottleneck.
I'm sure you're right. The current s3 chip is based on Xtensa, but it was released in 2020, so I guess the OP's statement is correct.
Edit: dead end for espressif.
Still gonna be in a bunch of DSPs and stuff
Espressif made a decisive shift to RISC-V[0], effectively abandoning Tensilica.
ESP32-S3 was, AIUI, their last non RISC-V chip.
It was announced in 2020 and released in 2022.
0. https://www.hackster.io/news/espressif-s-teo-swee-ann-confir...
From the link
>Espressif Systems (SSE: 688018.SH) announced ESP32-C5, the industry’s first RISC-V SoC that supports 2.4 GHz and 5 GHz dual-band Wi-Fi 6, along with Bluetooth 5 (LE) and IEEE 802.15.4 (Zigbee, Thread) connectivity. Today, we are glad to announce that ESP32-C5 is now in mass production.
This wording is ambiguous- it's the first to support 5GHz, but it's not their first RISC-V core.
Genuine question: is that a good or bad thing?
It's a big plus if you want to write code for it in something like Rust. LLVM support for the architecture they used on their older chips (xtensa) for a very long time required compiling a fork of LLVM and rustc in order to target the chips. It may still, I didn't keep up with the effort to upstream that target. RISC-V is an open architecture that has a lot of people excited so compiler support for it is very good. Though as far as why Espressif is using it, it feels likely they would use it because it means they don't have to pay anyone any royalties for the ISA.
It's a mix.
Better compiler support for RISC-V, but everything I've seen from them is a much shorter pipeline than the older Xtensa cores, so flash cache misses hit it harder.
Both RISC-V and Xtensa omit an ALU carry bit (which helps keep the pipeline simple), but for these small cores it means 64-bit integer math usually takes a few more cycles than on a Cortex-M Arm chip.
But that also depends on what you use it for. If you're after the wifi and IO and other nice things for a mostly idle device - the pipeline is almost irrelevant. Esphome can run on older versions just fine too. On the other hand if you're doing something very optimised and need tight timing around interrupts to drive external hardware - it may matter a lot.
So... depends on the project.
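To make the carry-flag point above concrete, here's roughly what a plain 64-bit add costs on a 32-bit core without one; the instruction sequences in the comments are the usual compiler lowering, shown for illustration only:

    #include <stdint.h>

    // Without a carry flag (RV32, Xtensa) the carry has to be recomputed with a compare:
    //   add  lo, a_lo, b_lo
    //   sltu c,  lo,   a_lo      // carry = (lo < a_lo)
    //   add  hi, a_hi, b_hi
    //   add  hi, hi,   c
    // A Cortex-M does the same thing in two instructions: ADDS + ADC.
    uint64_t add64(uint64_t a, uint64_t b)
    {
        return a + b;   // compiles to one of the sequences sketched above
    }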
The Xtensa variants also come with dual core options, which means you can offload timing sensitive stuff to a dedicated core.
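For anyone curious what that offloading looks like, here's a minimal sketch using the FreeRTOS API that ESP-IDF (and the Arduino core) exposes; timing_task() is just an illustrative placeholder:

    #include "freertos/FreeRTOS.h"
    #include "freertos/task.h"

    // Runs on core 1 (APP CPU), away from the Wi-Fi/BT stack on core 0.
    static void timing_task(void *arg)
    {
        (void)arg;
        for (;;) {
            // timing-sensitive work here (driving external hardware, etc.)
            vTaskDelay(1);
        }
    }

    void start_timing_task(void)
    {
        // Last argument picks the core to pin the task to.
        xTaskCreatePinnedToCore(timing_task, "timing", 4096, NULL,
                                configMAX_PRIORITIES - 1, NULL, 1);
    }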
My playing with the C3 revealed that you have to use much larger buffers for things like I2S to make it work without glitching.
I also found splitting interrupts between the two cores helps with latency, but even if one core has only a single interrupt, that interrupt latency is increased compared to a single core system with a single interrupt. I suspect this is at least partly because they only put a single fetch pipe between the instruction cache and the crossbar.
Absolutely correct.
I think it would be hard to argue that an ALU carry bit was a good idea, even if 64-bit maths takes a few more cycles.
There's definitely a trade-off between things that seem relatively simple to ISA but can really complicate the pipeline.
Xtensa pays for it with crippled 64-bit performance, which has a lot of downstream impacts. Ex: division by a constant is also slower. Most compilers don't even bother fast pathing 64-bit division by a constant.
I was surprised to find Apple kept ADC/ADCS in aarch64. Maybe this ends up being one of those things that's less useful or potentially a bottleneck depending on the specific implementation. Edit: backwards compatibility probably.
The fact that a few cores have bolted it on to RISC-V makes me think I must not be alone in missing it.
Unless you're a shareholder of arm, hard to see how it's a bad thing.
The other core they've used is Xtensa
Let's hope they will finally enable the USB Host HID Class Driver to support non-boot protocol devices this go around.
Any chance you could explain this to somebody who is just learning about HID and has run this example: https://github.com/espressif/esp-idf/tree/master/examples/pe... ? "non-boot protocol" I'm guessing is the key here? I don't have a super deep understanding of HID or what the "boot-protocol" refers to.
The USB HID protocol is designed to support basically any device that regularly reports a set of values; those values can represent which keys are pressed, how a mouse has moved, how a joystick is positioned, etc. Different devices support different things: joysticks have varying numbers of axes, mice have different sets of buttons, some keyboards have dials on them, and so on. So there's no single report format that both uses bandwidth efficiently and supports all the things a human interface device might do. To solve this, the HID protocol specifies that the host can request a "report descriptor" that describes the format and meaning of the status reports. This is great for complex devices running a full OS; there's plenty of memory and processing power to handle those varying formats.

However, HID devices also needed to work in very limited environments: a real-mode BIOS, a microcontroller, etc. So, for certain classes of device such as keyboards and mice, there is a standard but limited report format called the "boot protocol". IIRC, the keyboard version has space to list 6 keys that are pressed simultaneously (plus modifiers), all of which must be from the same table of keys in the spec, and the mouse has a dX and a dY field plus a bitfield for up to 8 buttons (four of which are the various ways you can scroll).

To implement a more complex device, you'd want to be able to specify your own report format, which the ESP driver doesn't seem to allow you to do.
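For reference, the fixed boot-protocol report layouts are simple enough to write down; this is my recollection of the HID spec's Appendix B, so double-check against the spec before relying on it:

    #include <stdint.h>

    // Boot keyboard report: 8 bytes, fixed layout.
    typedef struct __attribute__((packed)) {
        uint8_t modifiers;    // bitmask: left/right Ctrl, Shift, Alt, GUI
        uint8_t reserved;
        uint8_t keycodes[6];  // up to 6 simultaneously pressed keys (HID usage IDs)
    } boot_keyboard_report_t;

    // Boot mouse report: 3 bytes, fixed layout.
    typedef struct __attribute__((packed)) {
        uint8_t buttons;      // bits 0-2 defined as left/right/middle
        int8_t  dx;           // relative X movement
        int8_t  dy;           // relative Y movement
    } boot_mouse_report_t;

Anything richer than these fixed layouts needs the report-descriptor machinery described above.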
Thanks for taking the time to explain!
So your original comment / request was regarding USB specifically then?
I ask because I'd have guessed (possibly incorrectly!) that implement HID via GATT (BLE) you'd be able to support anything the BLE hardware revision could implement?
Perhaps the disconnect for me is that it's unclear when there is some special hardware that exists within the ESP32 itself (I think I2C, I2S, etc would be examples of this) vs something you are just implementing by manipulating the IO pins. Perhaps HID is one of those things?
The BLE and USB peripherals are separate, so limitations in USB HID do not carry over to BLE HID.
That was a virtuoso explanation! You’ve solved about 25 years of USB questions I’ve had in one post. Thank you very much.
Take an afternoon to play around talking to devices through libusb.
It was an eye opener for me.
Language does not matter (I used Go and Ruby) as long as bindings are reasonable.
Is this why certain USB keyboards I have don't seem to do anything in BIOS? I keep around a really dumb/boring dome keyboard for this purpose.
Maybe. If you have Linux the command `lsusb -v` gives you a verbose breakdown of the attached USB devices, if you find your keyboard it will show which interfaces it provides (a USB device can offer several if it wants) and to work at boot you want:
interface class 3 (a Human Interface Device) sub class 1 (Boot protocol or Boot interface)
In contrast the sub class 0 of HID is just the ordinary case, which is arbitrarily complicated (six thousand keys and four axis input? Why not) and so understandably a BIOS or similar environment might not implement that but a full blown OS usually does.
Doubtless tools exist for other platforms which can show the same information as lsusb
I'll give you my anecdote. I'm building a device that reads the input of a USB game controller; in my case, it's a sim steering wheel. I ended up needing to incorporate a MAX3421E USB host chip to read the HID input, because the ESP firmware doesn't have this implemented. Hardware-wise, all ESP32 chips with hardware USB could do this, but they haven't prioritized it in software. Some keyboards and mice use the "boot protocol", and you can get those to work. It's not very common in game controllers though.
Any guesses as to when a hobbyist might be able to buy the module without the dev board? Their aliexpress store didn't have them as far as I can tell, I assume they are prioritizing dev boards for the moment unless you're a big enough company to actually talk directly with Espressif.
Espressif won't sell you 3000 of these even if you ask - regardless of who you are.
Contact them directly and you might get 10 at this point.
They don't have datasheets up for the modules yet sadly
Can anyone answer my question: will the C5-WROOM be a pin-for-pin drop-in replacement for a C6?
It's on their store at https://www.aliexpress.com/item/1005008790788462.html
Thanks for the link, but yeah as the other poster mentioned, this is for a dev board. I'd be interested in buying the module (which is somewhere between the bare IC and the dev board). Here is an example from their store of an ESP32-S3 module: https://www.aliexpress.com/item/1005006334720108.html?pdp_np...
That's a dev board.
Digikey?
The only listing they currently have is for the dev board, and it's not stocked yet.
Is the 10k unit price public?
This is the real missing information. I don't know how much to care (in my professional capacity) without some pricing indication.
Professionally I use the ESP8266 a bunch because it's still the cheapest - but the lack of 5GHz is really starting to bite as customers complain it "doesn't work".
Bought a bunch of ESP32-C3 Supermini boards for 1.05€ each. Incredible value.
Honestly, shipping stuff direct from China and paying more for it here always trips me out, but it's kinda the norm now. You think it'll get easier to actually repair or reprogram stuff or are we just stuck tossing hardware once it's locked down?
Too bad after tariffs, this little guy would have cost $10-15 and now will probably be $50 +
I feel like the Trump admin is going to have to make a carve-out for the ESP32 or certain Espressif products. So many IoT businesses are going to go out of business if these MCUs balloon in price.
These are exempt from recent tariffs (HTS 8542).
Alright, I figured that would have to be the case. Do you know any resources that break down what is exempt and what is not, without having to look up the individual HTS codes?
https://content.govdelivery.com/accounts/USDHSCBP/bulletins/...
I wish there was something more powerful than STM32H7 or RT1070 available. It would be awesome to be able to compute complex algorithms in real time.
how much memory does the dev kit have? it’s not clear after following links off that article.
If it's like the other ESP32s with PSRAM support then 2-8MB most likely. IIRC it is addressed in the same way as the NAND, so the more RAM the less NAND you can have.
Maybe not applicable for this new one, but that's my understanding for the S3/C5 models. (something like 16mb NAND and 8mb PSRAM)
This image says 384 KB of RAM: https://docs.espressif.com/projects/esp-dev-kits/en/latest/e...
It also says 320 KB of ROM, which seems low. Judging from the product name (DevKitC-1-N8R4) and their other products, it has 8 MB of flash.
Maybe 320KB is the size of the bootrom? Although I don't see why that would be worth mentioning.
onboard memory is limited. PSRAM is available
Does it have floating point hardware?
Does it have CAN?
How does the core compare to their old ones?
I'm a little disappointed that it only has one core even though I haven't used the second one on the older chips yet.
At least on Arduino, the second core is used for wifi.
So you can't really use it yourself unless you don't want the wifi to be reliable.
I was able to find a preliminary datasheet on Google, it looks like it has 2x CAN (called TWAI in the datasheet). I can't find info on whether it has an FPU or not
https://www.erlendervik.no/ESP32-C5%20Beta_ESP32-P4_ESP8686_...
https://docs.espressif.com/projects/esp-dev-kits/en/latest/e...
Pinout for the dev board.
Yeah, I too wonder when they will release their first multi-core RISC-V chip. I guess it's not that easy.
$126/ea in quantity after tariffs.
Why do I have this sickening feeling that in a few years anyone doing anything with hardware is going to be ordering everything direct from China, like we're some kind of undeveloped client state?
You are already living in this reality. And it has already been happening for quite a few years by now. People that are especially vocal and badly hit by tarrifs at the moment, are the people who have already been doing just that.
This transition happened so quickly that most people haven't fully caught up to the full extent of its implications. In my mind, China is already the center of gravity.
We've still got local US distributors though, regardless of everything being made in China. Like if you decide you need something tomorrow, you can go on Amazon and get most things pretty quick (despite overpaying 1.5-2x compared to Aliexpress). And there's a whole cottage industry of 3d printing shops selling canned solutions to people who don't want to hunt Aliexpress themselves.
It's been well over a decade since I was doing embedded design professionally, so my perspective is coming more from a hobby/3d printing/"maker" place. But it feels like one of the main results of these tariffs is that the bottom is going to drop out on Chinese and Chinese-adjacent sellers preloading so much stuff into US warehouses ahead of sale, and instead just shipping orders direct from China. Using a US warehouse means the seller has to front the money for the tariffs as well and takes a risk of them being lowered depending on Krasnov's whims. Whereas shipping direct from China, even if the seller is handling the tariffs (eg Aliexpress Choice), they've already got the cash in hand from a confirmed purchase.
Which tariffs? Microcontrollers are exempt from recent reciprocal tariffs.
I was just being facetious about the general pain we're feeling from the US's new tariff-based national sales tax. And I haven't been following any reciprocal actions for what is now expensive to get into China. Are -C5's only made in Taiwan or something?
No, ICs are explicitly exempt (along with a lot of other electronics parts categories.)
Ah, well that's handy to know. For this week, at least. I've just been watching the 3d printer parts I just squeaked in under the de minimis wire triple in price due to the new taxes.
We already are!
They link to a $15 developer board on Aliexpress (much the same as the rest of the ESP developer boards floating around for years) which is now inflated to $35 with tax, shipping, and tariff.
My impulse purchase has been tempered with "eh, do I really need it?"
Don't confuse people. It's only like that in a single country that seems hell bent on making itself not matter that much anymore.
Even $15 is on the high end for Espressif dev boards. Not that it's saying much.
If all you need is Zigbee/BLE and a few IO pins, an nRF52840 dongle is still $10 on DigiKey.
These are ones actually made by Espressif and limit is one per person (presumably supply issues as they ramp up mass production), certainly there will be dozens of clones soon.
Fortunately it’s only £16.40 with VAT and shipping to the UK. Approx $21.85. Comparable to the £9 M5Stack AtomS3 Lite (ESP32-S3) I picked up from Pi Hut recently.
This should help all the US-based RISC-V microcontroller companies though, right? /s
Finally all the mom and pop chip fabs running out of a garage get a fighting chance.
Can't wait to buy from some west coast hipster with a garage fab now that it's economically viable. /s
The really criminal thing is it only costs $8.43 to mail that thing from China to your house in the USA... it likely would cost you more to mail that same item to yourself from yourself.
That alone puts US-based sellers at a mega disadvantage compared to cheap Chinese goods - and it's not a good thing.
Most of my Aliexpress electronics orders are shipped to a local US Aliexpress distribution center which then mails them locally. These come in small padded envelopes which are not expensive to ship.
Well they all get made in China, why should i need to pay a US middleman
>The really criminal thing is it only costs $8.43 to mail that thing from China to your house in the USA... it likely would cost you more to mail that same item to yourself from yourself.
These things are tiny and very cheap to ship. I could probably pack 40 of them into a USPS flat rate box shipped anywhere in the US for $9.30.
This was a decent argument when you could get things shipped from China for $0.50, but not now or in this case.
Yeah if I were mailing a single one of these to myself I bet I could get away dropping it in a regular envelope with maybe an extra stamp. (Assuming they come without the headers soldered on like most of the clones I’ve got from AliExpress, RIP).
From my understanding, RISC-V chips are slower and more expensive and have less optimized compilers, so why in the world would an end user use one?
No? Performance is implementation specific, they’re usually cheaper than ARM since there’s no core ISA license overhead, and while the core instruction set being extremely limited does cause a little bit of tension in compiler land, most cores with a baseline set of extensions get reasonable code generation these days.
One of the main reasons RISC-V is gaining popularity is that companies can implement their own cores (or buy cheaper IP cores than from ARM) and take advantage of existing optimizing compilers. Espressif are actually a perfect example; the core they used before (Xtensa) was esoteric and poorly supported and switching to RISC-V gives them better toolchain support right out of the gate.
You are really only correct in your last point as the advantage of RISC-V is to the company implementing their own core, not to the end user.
The reason is that the CPU cores form only a tiny part of the SoC; the rest of the SoC is proprietary, likely documented only to whatever level the company needs, with anything further hidden under layers of NDAs. Just because the ISA is open source does not mean you know anything about the rest of the chip.
That said, the C5 is a nice SoC, and it is nice that we have some competition to ARM.
But where do the original Xtensa cores place then?
If anyone from Espressif is seeing this: I love your MCUs. But can you please improve the ESP-IDF so that it's usable on BSD systems? The Linuxisms baked into its build system are unnecessary.
I think moving from Make in the old version of IDF to CMake was a mistake.
Love it or hate it, CMake is more or less the de facto build system for C/C++
And just like any build system for any language/stack, there is a small group of hardcore "enthusiasts" who create and push their one true build tech to rule them all, and then there is the large majority of people who have to deal with it and just want to build the damn thing.
I didn't mean to sound like a hard-core BSD enthusiast, sorry. I was just very frustrated when they moved to CMake in their newer IDF; they added useless things that excluded the BSDs. In its current state, it's untenable to patch it to make it work. This wasn't caused by the use of CMake. They could've moved to CMake but done so with other OSes in mind, especially since they're already in our neighbourhood.
Hate it, definitely hate it.
I mean, I use it, but I'm not very happy about it.
It should generally be easier to make a CMake buildsystem work well on the BSDs than hand-cobbled Makefiles, in terms of opportunities to introduce Linuxisms.
I wasn't really clear there. The Linux-specific stuff wasn't caused by CMake. They are two independent things that happened as part of the same upgrade.
Considering that they are supporting Linux, there was no real reason to make it so Linux-specific that all other Unix-like systems got excluded.