Chinese project aims to run RISC-V code on AMD Zen processors
Why not re-write the code for x86 CPUs?

Last month, a team of Google security researchers released Zentool, a utility that can modify the microcode of AMD processors based on the Zen microarchitecture. While the underlying flaw is a security vulnerability, for some it is an opportunity: members of the Chinese Jiachen Project are running a contest that aims to develop microcode for AMD's modern Zen-based CPUs that makes them execute RISC-V programs natively. The ultimate goal could be building a high-performance RISC-V CPU out of already available silicon.
x86 is a complex instruction set computer (CISC) instruction set architecture (ISA) that dates back nearly five decades. Internally, however, modern x86 cores rely on proprietary engines that execute a reduced instruction set computer (RISC)-like ISA to handle complicated instructions. These internal ISAs are not documented, though they are often assumed to broadly resemble well-known RISC ISAs such as Arm or RISC-V. CPU microcode is the low-level layer that translates complex x86 CISC instructions into the simple RISC-like internal operations the CPU hardware actually executes. Microcode is only supposed to be modifiable by the CPU vendor, but that is not always the case: parts of AMD's Zen 1/2/3/4 microcode can apparently be changed using Zentool.
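To make the idea concrete, here is a minimal, purely conceptual sketch of the kind of decomposition a microcoded front end performs: a single CISC-style instruction with a memory operand is split into simpler load/compute/store steps. The micro-op names and format below are invented for illustration only; AMD's real internal encoding is undocumented and, as commenters note further down, not necessarily RISC-like at all.

```python
# Conceptual sketch only: the micro-op names and format here are invented for
# illustration and do not reflect AMD's real (undocumented) internal encoding.

def decompose(instruction):
    """Split a symbolic CISC-style instruction into simple internal steps."""
    op, dst, src = instruction
    if op == "add" and dst.startswith("mem["):
        addr = dst[4:-1]                      # e.g. "mem[rdi]" -> "rdi"
        return [
            ("load",  "tmp0", addr),          # read the memory operand
            ("add",   "tmp0", "tmp0", src),   # do the arithmetic in a temporary
            ("store", addr,  "tmp0"),         # write the result back
        ]
    return [instruction]                      # register-only ops pass through

# Example: the single x86 instruction `add [rdi], rax` becomes three RISC-like steps.
for step in decompose(("add", "mem[rdi]", "rax")):
    print(step)
```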
The Jiachen Project wants to find someone who can modify AMD's Zen CPU microcode on a modern processor — say, an EPYC 9004-series chip — to execute RISC-V binaries. The patch is expected to either enable direct execution of RISC-V programs or significantly boost their runtime speed compared to emulation on the same hardware. The work must be tested using RISC-V versions of benchmarks such as CoreMark or Dhrystone. A complete submission includes binaries or source code, configuration files, dependencies, and test instructions. If only binaries are submitted before the June 6 deadline, identical source code must be added via a pull request later. The winner will get ¥20,000 (approximately $2,735).
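The contest page does not prescribe a specific test harness, but a contestant would presumably first record the emulation baseline a submission has to beat. A minimal sketch of such a measurement, assuming QEMU's user-mode emulator (qemu-riscv64) is installed and a statically linked RISC-V CoreMark binary has already been built (the ./coremark.riscv path is hypothetical), might look like this:

```python
# Hypothetical baseline measurement: times a statically linked RISC-V CoreMark
# binary under QEMU's user-mode emulator (qemu-riscv64). Assumes both the
# binary (./coremark.riscv) and qemu-riscv64 are already available.
import subprocess
import time

def run_emulated(binary="./coremark.riscv", runs=3):
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(["qemu-riscv64", binary], check=True,
                       stdout=subprocess.DEVNULL)
        timings.append(time.perf_counter() - start)
    return min(timings)   # best-of-N wall-clock time as the emulation baseline

if __name__ == "__main__":
    print(f"emulation baseline: {run_emulated():.2f} s")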
AMD's EPYC 9004-series and similar processors offer performance and core counts not achievable with currently available RISC-V processors, so running RISC-V programs on EPYC hardware is an appealing idea. However, as people over at Y Combinator's Hacker News noted, microcode is designed to patch internal bugs rather than replace the front-end ISA outright, and it is not even clear that the microcode can be completely rewritten.
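Part of the appeal is that software emulation pays a fetch/decode/dispatch toll on every single guest instruction, typically expanding one RISC-V instruction into dozens of host instructions. The toy interpreter below, which handles only the RISC-V ADDI and ADD instructions, is a minimal sketch of that per-instruction overhead; it is this overhead that any microcode-assisted scheme would have to undercut to be worthwhile. It has nothing to do with AMD's microcode itself.

```python
# Minimal sketch of a RISC-V interpreter's inner loop, handling only ADDI/ADD.
# It illustrates the fetch/decode/dispatch work a software emulator repeats for
# every guest instruction -- the overhead the contest hopes microcode could cut.
regs = [0] * 32

def sign_extend(value, bits):
    return value - (1 << bits) if value & (1 << (bits - 1)) else value

def step(instr):
    opcode = instr & 0x7F
    rd     = (instr >> 7)  & 0x1F
    rs1    = (instr >> 15) & 0x1F
    rs2    = (instr >> 20) & 0x1F
    if opcode == 0x13:                       # ADDI (I-type)
        imm = sign_extend(instr >> 20, 12)
        regs[rd] = (regs[rs1] + imm) & 0xFFFFFFFFFFFFFFFF
    elif opcode == 0x33:                     # ADD (R-type, funct3/funct7 = 0)
        regs[rd] = (regs[rs1] + regs[rs2]) & 0xFFFFFFFFFFFFFFFF
    if rd == 0:                              # x0 is hard-wired to zero
        regs[0] = 0

# addi x5, x0, 7 -> 0x00700293 ; add x6, x5, x5 -> 0x00528333
for word in (0x00700293, 0x00528333):
    step(word)
print(regs[5], regs[6])                      # 7 14
```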
Back in the mid-2010s, AMD planned to offer both x86-64 and Armv8-A Zen CPUs (something recently recalled by Mike Clark, AMD's chief architect), so it is highly likely that there was microcode for the Zen 1 microarchitecture that supported an AArch64 front-end ISA. If so, Zen 1 CPUs might even have featured multiple microcode 'slots,' one supporting x86-64 and another AArch64. We doubt this is the case, though, as modern CPUs include extensive hardwired performance optimizations that tie the microcode to the rest of the core. AMD almost certainly never developed AArch64 or RISC-V microcode for Zen 2/3/4 processors, so the microcode layer of these CPUs is strictly x86-64, and there is hardly enough writable microcode space to rewrite it from scratch.
"This is not achievable," one commenter named Monocasa wrote. "There is not enough rewritable microcode to do this even as a super slow hack. And even if all of the microcode were rewritable, microcode is kind of a fallback pathway on modern x86 cores with the fast path being hardwired decode for x86 instructions. And even if that were not the case the microcode decode and jump is itself hardwired for x86 instruction formats. And even if that were not the case the micro-ops are very non-RISC."
One commenter criticized the contest format, suggesting it is a way to get complex work done for less than $3,000 pay.
In general, while the concept of rewritable microcode is an interesting one that stimulates discussion about alternative CPU designs, multi-ISA support, and low-level optimization, it does not look like the contest will achieve the stated goal. Perhaps rewriting (or rather recompiling) a RISC-V program or two for x86 CPUs makes more sense?

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.
-
bit_user
The article said:
"Members of the Chinese Jiachen Project are running a contest that aims to develop microcode for AMD's modern Zen-based CPUs that makes them execute RISC-V programs natively."
First of all, I just want to point out that this contest was linked in a prior article:
https://www.tomshardware.com/pc-components/cpus/amds-microcode-vulnerability-also-affects-zen-5-cpus-granite-ridge-turin-ryzen-ai-300-and-fire-range-at-risk
As with that article, you linked the Chinese language version of the page. At the top, there's a way to switch it to English, which I'll link here:
https://rvspoc.org/en/S2502/?persistLocale=true
And, as I commented on that article, what the contest actually says is:
"Complete microcode modifications for Zen-series CPUs to enable direct execution of RISC-V binary programs. If direct execution is not feasible, optimize the modified microcode to achieve a significant level of acceleration for RISC-V binaries."So, they're not asserting that it's indeed possible (which I doubt), but leaving open the possibility that contestants can at least devise some microcode tweaks that improve the efficiency of RISC-V emulation or JIT execution.
The article said:
"Internally, however, modern x86 cores rely on proprietary engines that execute a reduced instruction set computer (RISC)-like ISA to handle complicated instructions. These internal ISAs are not documented, though they are often assumed to broadly resemble well-known RISC ISAs"
There's been some reverse-engineering work done. For those who know a little about assembly language and want to get a rough idea what the micro-ops look like, start here:
https://github.com/google/security-research/blob/master/pocs/cpus/entrysign/zentool/docs/reference.md
The article said:
"The Jiachen Project wants to find someone who can modify AMD's Zen CPU microcode on a modern processor — say, an EPYC 9004-series chip — to execute RISC-V binaries."
They should really focus on Zen 5, exclusively. It has 3x the microcode table size, which will almost certainly be necessary for the more interesting and profitable hacks. Note the author ( ;) ):
https://www.tomshardware.com/news/amd-set-to-substantially-increase-microcode-size-of-future-cpus
The article said:
"Back in the mid-2010s, AMD planned to offer both x86-64 and Armv8-A Zen CPUs"
The ARM core, based on Zen, was internally called the K12. It's the last thing Jim Keller was known to work on, before he left AMD (for the second time).
The article said:
"so it is highly likely that there was microcode for the Zen 1 microarchitecture that supported an AArch64 front-end ISA. If so, Zen 1 CPUs might even have featured multiple microcode 'slots,' one supporting x86-64 and another AArch64."
Not likely. Everything that's come out about it suggests it was a distinct core that differed mostly in the front-end.
There have been CPUs which did some sort of realtime translation/emulation. I think Transmeta was one example. I seem to recall that Itanium also had some sort of x86 mode, which I believe involved some kind of hardware-assisted emulation. However, the resulting performance was quite lackluster, I'm sure mostly owing to the limitations of IA64.
The article said:
"One commenter criticized the contest format, suggesting it is a way to get complex work done for less than $3,000 pay."
Yeah, my thought was that they'd do better by getting people to collaborate. However, the contest certainly sparked interest and garnered attention. Maybe they'll try to combine some of the best ideas from contest entrants into an open source project, afterwards, and welcome contestants to stay involved.
The article said:
"it does not look like the contest will achieve the stated goal."
The real goal is just to increase RISC-V execution performance. Whether you can achieve native execution or not is sort of a detail. A secondary benefit (not sure if it's actually a goal) is to crowd-source reverse engineering of AMD's micro-op language, which should shed further insight into the microarchitecture of their CPUs.
I think a fruitful example of the sort of thing that might improve RISC-V emulation is this hack Apple made to the ARM ISA, in order to facilitate faster x86 emulation:
https://dougallj.wordpress.com/2022/11/09/why-is-rosetta-2-fast/ -
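For readers unfamiliar with the realtime translation approach mentioned above (Transmeta, Rosetta 2), the toy sketch below shows the core translate-once-and-cache idea: each guest instruction is decoded a single time and turned into a reusable host routine. Real translators emit native machine code and work on whole basic blocks; here, Python closures and single RISC-V ADDI/ADD instructions stand in for that, purely as an illustration.

```python
# Toy dynamic-binary-translation sketch: each guest instruction is decoded once
# and turned into a host-level closure, which is cached and reused on later
# executions -- the translate-once/run-many idea behind Transmeta and Rosetta 2.
# Real translators emit native machine code; x0 handling is omitted for brevity.
regs = [0] * 32
translation_cache = {}

def translate(instr):
    """Decode one RISC-V ADDI/ADD word into a callable 'host' routine."""
    opcode = instr & 0x7F
    rd  = (instr >> 7)  & 0x1F
    rs1 = (instr >> 15) & 0x1F
    if opcode == 0x13:                                   # ADDI (I-type)
        imm = instr >> 20
        if instr & 0x80000000:                           # sign-extend the 12-bit immediate
            imm -= 1 << 12
        def op():
            regs[rd] = regs[rs1] + imm
        return op
    if opcode == 0x33:                                   # ADD (R-type)
        rs2 = (instr >> 20) & 0x1F
        def op():
            regs[rd] = regs[rs1] + regs[rs2]
        return op
    raise NotImplementedError(hex(instr))

def execute(instr):
    fn = translation_cache.get(instr)
    if fn is None:                                       # decode only on first sight
        fn = translation_cache[instr] = translate(instr)
    fn()                                                 # cache hits skip decoding entirely

# addi x5, x0, 7 ; add x6, x5, x5 (run twice -- the repeat hits the cache)
for word in (0x00700293, 0x00528333, 0x00528333):
    execute(word)
print(regs[5], regs[6])                                  # 7 14
```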
ezst036
Admin said:
"Why not re-write the code for x86 CPUs?"
x86 is the long-awaited successor for RISC-V. :) -
mtrantalainen
Admin said:
"A new contest inspired by Google's Zentool challenges developers to modify AMD Zen CPU microcode to run RISC-V programs natively, but experts argue the goal is unfeasible. Google tool spurs contest to Run RISC-V on AMD Zen CPUs: But is it possible?"
The offered prize money (less than $3,000) is way too little for this complex task.
I think you could realistically see some submissions if you increase the prize 100-fold. -
ThereAndBackAgain
One question: why? Is the idea to make the development of RISC-V apps easier? Surely it's not to satisfy the great demand for being able to run all the many exclusively-RISC-V apps on x86? Or is this more a "because it's there/because we can" type of venture? -
bit_user
mtrantalainen said:
"The offered prize money (less than $3,000) is way too little for this complex task."
Right. You wouldn't do it just for that. I think it's enough to draw in people who are naturally interested in hacking on it and getting them to actually make a formal submission.
Let's see what kinds of entries they get. -
bit_user
ThereAndBackAgain said:
"One question: why? Is the idea to make the development of RISC-V apps easier?"
There are a lot of AMD CPUs out there that are vulnerable to this exploit. Right now, I believe most RISC-V development does happen using emulators. So, the upside to making those emulators faster is definitely real.
As I said, another benefit is crowd-sourced reverse-engineering & documentation of AMD's internal micro-ops. You can probably imagine various uses such information might have.
ThereAndBackAgain said:
"Surely it's not to satisfy the great demand for being able to run all the many exclusively-RISC-V apps on x86?"
Several major Linux distros already support RISC-V, and their CI pipelines currently rely on emulators running on x86 and ARM servers, because how else do you bootstrap something like that before we even have big-iron RISC-V CPUs? -
rluker5
bit_user said:
"There have been CPUs which did some sort of realtime translation/emulation. I think Transmeta was one example. I seem to recall that Itanium also had some sort of x86 mode, which I believe involved some kind of hardware-assisted emulation. However, the resulting performance was quite lackluster, I'm sure mostly owing to the limitations of IA64.
The real goal is just to increase RISC-V execution performance. Whether you can achieve native execution or not is sort of a detail. A secondary benefit (not sure if it's actually a goal) is to crowd-source reverse engineering of AMD's micro-op language, which should shed further insight into the microarchitecture of their CPUs.
I think a fruitful example of the sort of thing that might improve RISC-V emulation is this hack Apple made to the ARM ISA, in order to facilitate faster x86 emulation:
https://dougallj.wordpress.com/2022/11/09/why-is-rosetta-2-fast/"
Other examples of CPUs that did realtime emulation/translation (of ARM to x86) are the decade-old Atom chips that ran native Android in the Asus Zenfone 2 and Leagoo T5C.
Those chips did not perform badly at the time but have pathetic performance by modern standards. (The Z3775 in my old Windows tablet gets 47 single / 178 multi in CPU-Z and is a slightly faster version of the Z3580 found in the Zenfone 2.) I don't know how much faster ARM chips have gotten over the last decade, but Zen 5 is almost 20x as fast in single-core as these old Atom chips. If ARM hasn't increased single-core performance by an order of magnitude, maybe there is something to be gained. But if ARM chips have, then running RISC-V on x86 is probably a stopgap at best. -
bit_user
rluker5 said:
"Other examples of CPUs that did realtime emulation/translation (of ARM to x86) are the decade-old Atom chips that ran native Android in the Asus Zenfone 2 and Leagoo T5C."
I don't know if they had/used an ARM emulator in those phones, but it wasn't hardware-based, which is what I was talking about.
Android was originally (and I think still predominantly) Java-based. Java is compiled to portable bytecode, which is then JIT-compiled to run on your native architecture (either at runtime or, I think, when you install the app). This JIT-compilation scheme has been the norm for a couple of decades, if not all the way back to the '90s.
Java has used bytecode from the beginning, with portability across different CPU ISAs being the main goal - it was developed by Sun Microsystems, who had their own RISC CPUs, but it was positioned as a web standard that would enable browser-based applets to run on Macs, PCs, Sun workstations, etc. It was even incorporated into the Blu-ray standard for this very reason.
Android does allow apps to bundle some native code, but it's fairly rare for them to do so. It's mostly just used by things like game engines. The Android NDK has support for compiling to multiple targets, including x86. So, even if an app included some native code, it could bundle an ARMv7, ARMv8, and x86-64 version of that code. -
mikeztm
bit_user said:
"I don't know if they had/used an ARM emulator in those phones, but it wasn't hardware-based, which is what I was talking about.
Android was originally (and I think still predominantly) Java-based. Java is compiled to portable bytecode, which is then JIT-compiled to run on your native architecture (either at runtime or, I think, when you install the app). This JIT-compilation scheme has been the norm for a couple of decades, if not all the way back to the '90s.
Java has used bytecode from the beginning, with portability across different CPU ISAs being the main goal - it was developed by Sun Microsystems, who had their own RISC CPUs, but it was positioned as a web standard that would enable browser-based applets to run on Macs, PCs, Sun workstations, etc. It was even incorporated into the Blu-ray standard for this very reason.
Android does allow apps to bundle some native code, but it's fairly rare for them to do so. It's mostly just used by things like game engines. The Android NDK has support for compiling to multiple targets, including x86. So, even if an app included some native code, it could bundle an ARMv7, ARMv8, and x86-64 version of that code."
It's quite common for Android apps to have native components, for security reasons more so than performance reasons.
And back in the day, ARMv7 was the only meaningful target of the two then available, ARMv7 and MIPS. Intel had to bundle the ARM translator code-named Medfield to run those ARMv7-native binaries.
Medfield is purely software-based and can run on any then-modern Intel or AMD processor. It was included in a lot of “Android emulator” software, including BlueStacks. -
bit_user
mikeztm said:
"It's quite common for Android apps to have native components, for security reasons more so than performance reasons.
And back in the day, ARMv7 was the only meaningful target of the two then available, ARMv7 and MIPS. Intel had to bundle the ARM translator code-named Medfield to run those ARMv7-native binaries.
Medfield is purely software-based and can run on any then-modern Intel or AMD processor. It was included in a lot of “Android emulator” software, including BlueStacks."
Medfield was the name of Intel's phone SoC, not the ARM -> x86 binary translator.
However, thanks for the mention. It helped me find this description, from which I quote:
"The OS isn't an issue as it has already been ported to x86 and all further releases will be available in both ARM and x86 flavors. The bigger problem is application compatibility.
There's already support for targeting both ARM and x86 architectures in the Android NDK so anything developed going forward should be ok so long as the developer is aware of x86.
Obviously the first party apps already work on x86, but what about those in the Market?
By default all Android apps run in a VM and are thus processor architecture agnostic. As long as the apps are calling Android libraries that aren't native ARM there, once again, shouldn't be a problem. Where Intel will have a problem is with apps that do call native libraries or apps that are ARM native (e.g. virtually anything CPU intensive like a 3D game).
Intel believes that roughly 75% of all Android apps in the Market don't feature any native ARM code. The remaining 25% are the issue. The presumption is that eventually this will be a non-issue (described above), but what do users of the first x86 Android phones do? Two words: binary translation."
"... by intercepting ARM binaries and translating ARM code to x86 code on the fly during execution Intel is hoping to achieve ~90% app compatibility at launch."
Source: https://www.anandtech.com/show/5365/intels-medfield-atom-z2460-arrive-for-smartphones/5