Monday, June 19, 2023

YARC (Yet Another Retro Computer)

 Over the past two years I architected, designed, and built YARC, a 16-bit computer. It looks like this:

As you can see, it's implemented using a sort of "wire wrap over solderless breadboard" style. The solderless breadboards are stuck down to a ground plane. Copper bars on bolts (insulated from the ground plane by washers made of high-temperature PEEK plastic) carry +5 volts to each of the breadboard power strips through pairs of 30ga wires. The pictures below help illustrate the construction process.

Unfortunately, images uploaded to Blogger have low resolution. The images in this posting, and a few more, along with all the design artifacts (hardware and software), are on Github: project root, latest images, PDF of the KiCad schematic, documentation. The images on GH have much higher resolution (4032x3024).

We moved into our current home in January 2021, and it offered the most important thing of all: a place to work.

First experiments were with the clock ...

 

...and the physical packaging. I'm a nut about power and ground being solid. But soldering to the ground plane and copper bars is hard ... they carry away a lot of heat. You can see the PEEK plastic washers, which don't melt at soldering temperatures, isolating the tall (+5v) bolts from the ground plane. I also had to grind away a small amount of ground plane (copper cladding) under the washers to prevent small, side-to-side motion of the tall bolts from shorting to ground. 


I had decided the YARC would be permanently tethered to the host computer and have microcode in RAM because I didn't like popping EPROMs in and out of sockets. I investigated various ways to do this, and finally concluded that a USB connection would get YARC on the air the soonest. An Arduino, equipped to read and write on the YARC's buses, seemed like the simplest way to achieve this.

Around this time I stumbled onto the "wire wrap over solderless breadboard" approach. In late 2021, I had this:


74HC574 bus interface registers at left and a couple of '138 decoders for pulse outputs above the Nano.

Next, I built out the clock and an 8-bit display register to the Nano's "north" and started on the YARC's 32k main memory subsystem:


The picture shows the memory controller under test, with the 8k x 8 SRAMs not yet installed. There's still just one YARC "module" (a 1' x 1' copper-clad board).

Here, you can see the memory subsystem complete and a second YARC module. The instruction register lies to the "west" of the memory module, and the controls for the microcode RAM are being wired. The two YARC modules are still sitting side-by-side on an antistatic mat at this point, not attached to a base.


It came time to build out the additional two modules, buy some plywood, and attach all four modules. This was, I think, summer 2022.


With the microcode engine complete I began wiring the general registers and connecting the "real" (microcode) control signals to their various points - up to this time, everything had been controlled from the Nano. Control signals are mostly on white wires, external buses are green, internal buses (isolated by transceivers) are blue, and clocks are yellow.

There are an excessive number of isolated buses for such a simple machine because of my decision to implement the microcode and even the ALU in RAM. Making these spaces writable had consequences for complexity that I did not appreciate when I decided to go this way.

The results are still cool, though.


And finally, June 2023, the basic design is complete:


The image is rotated 90 degrees, with the Nano ("Downloader") at upper right. The clock circuit and the Nano's 8-bit display register are at top center. Proceeding down the right side of the image from the Nano are main memory, the instruction register, microcode controls, and the 8k x 32 bit microcode RAM. At center closest to the camera is the dual-banked array of four general registers, capable of two reads and one write per cycle, implemented with 74HC670s and multiplexers.

The three large RAM chips at lower left form the ALU, a carry-select adder implemented with lookup tables. The ALU is 8 bits wide and requires two cycles to do any operation. The flags logic and ALU controls are above them, adjacent to a small pair of pliers that volunteered for the photo.
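For readers who haven't seen the lookup-table carry-select trick, here is a minimal sketch in Go rather than hardware. It splits each 8-bit operand into nibbles, precomputes the high-nibble result for both possible carry-ins, and uses the low nibble's carry-out to select between them. The nibble split and table layout here are my illustration only; the actual organization of YARC's three RAM chips may differ.

```go
package main

import "fmt"

// entry holds a 4-bit sum and its carry-out, as a RAM lookup table would.
type entry struct {
	sum, carry uint8
}

// buildAddTable fills a 256-entry table indexed by two packed nibbles.
func buildAddTable(carryIn uint8) [256]entry {
	var t [256]entry
	for a := 0; a < 16; a++ {
		for b := 0; b < 16; b++ {
			s := uint8(a) + uint8(b) + carryIn
			t[a<<4|b] = entry{sum: s & 0x0F, carry: s >> 4}
		}
	}
	return t
}

func main() {
	low := buildAddTable(0)   // low-nibble table (carry-in fixed at 0)
	high0 := buildAddTable(0) // high-nibble table assuming carry-in 0
	high1 := buildAddTable(1) // high-nibble table assuming carry-in 1

	add8 := func(a, b uint8) (sum, carry uint8) {
		lo := low[(a&0x0F)<<4|(b&0x0F)]
		hi := high0[(a>>4)<<4|(b>>4)]
		if lo.carry == 1 { // the "select": use the precomputed carry-in-1 result
			hi = high1[(a>>4)<<4|(b>>4)]
		}
		return hi.sum<<4 | lo.sum, hi.carry
	}

	s, c := add8(0x7C, 0x65)
	fmt.Printf("0x7C + 0x65 = 0x%02X, carry %d\n", s, c) // 0xE1, carry 0
}
```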

The "completed" YARC is desperately short of address registers: only the four registers can drive the address bus, and two of them must serve as PC and SP. Fortunately, it will be simple to add a couple of transceivers from the non-architectural ALU output holding register to the address bus. This will at least allow for direct addressing and for (register indirect + immediate offset) against any of the four registers.

Now I need a lot of microcode. In yarc/yarc/pkg/asm there is an assembler of a somewhat unusual design: it combines microcode authorship for defining opcodes with traditional assembler programming using the opcodes so defined. Some code I've written is in yarc/yasm. Once I have enough of the instruction set defined, I hope to write a translator from a small subset of wasm to YARC assembler. This would allow C programming using any compiler that can produce wasm (e.g. GNU C).

As you can see from the pictures, the basic 16-bit computer with writable control store and ALU functionality fit in 3/4 of the available hardware space. I'd like to implement a very basic VGA adapter, probably something more like Ben Eater's design than James Sharman's, and then implement a tile-like game that will remind observers of Tetris. I can't implement real Tetris because of intellectual property issues, but I'd like to be able to claim I got "YARC to Tetris" working, someday.





Friday, March 10, 2023

Now It Can Be Told

Testing Trident Ballistic Missiles

Note: I have never held a security clearance. The information below is 40 years old; it predates GPS. I'm reasonably sure it's all completely obsolete.

Introduction

I began my working career in the Santa Barbara, CA area, where the small tech community is occasionally called "Silicon Beach". During the Bummer Summer of '76 (so-called because El Niño conditions kept it cool and foggy), I worked assembling the world's first digitally frequency-synthesized dual-band ham radio, the Comcraft CST-50. In December of that year I got a similar circuit board assembly job at Sonatech, a local maker of underwater equipment.

At Sonatech I discovered "high tech". I worked there for a few years and continued summers and Christmas breaks when I went back to school. During this time, around 1980, Sonatech picked up an important contract called STS, "Sonar Tracking System". I began working on STS after I graduated UC Santa Barbara in 1982 and continued working on it for the next couple of years.

STS

The goal of STS was to build a single instance of a system that provided range and bearing to a "cooperative" underwater target from a nearby ship. STS was installed on AGDS-2, the USS Point Loma. The Point Loma was the Launch Area Support Ship (LASS) for the Pacific Missile Test Range. It was tasked to be on the scene when a submarine launched the then-new Trident I ballistic missile, now known as the UGM-96. In short, the purpose of STS was to provide information that would allow other shipboard equipment to know exactly where the missile was going to pop out of the water. In particular, shipborne antennas could guarantee instant acquisition by being correctly aligned prior to launch. The shielded antennas are visible here and in more detail here as the "golf balls" near the bow.

How did STS work? Well, the "cooperative target", a Trident missile submarine, carried a pinger during the test which responded when interrogated by the ship. So the system needed only to ping and measure the response time from the pinger (to establish range) on two or more receivers with known spacing (to establish bearing). Given that Sonatech and its sister company International Transducer were experts at building sonar transmitters and receivers, it sounds pretty straightforward, right?
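In the idealized case, the arithmetic really is that simple. Here is a toy sketch, assuming straight-line sound paths, a fixed pinger turnaround delay, and made-up numbers; none of it is taken from STS.

```go
package main

import (
	"fmt"
	"math"
)

const soundSpeed = 1500.0 // m/s, a typical value for seawater

// rangeFromEcho: one-way range given the round-trip time and the pinger's
// fixed turnaround (reply) delay.
func rangeFromEcho(roundTrip, turnaround float64) float64 {
	return soundSpeed * (roundTrip - turnaround) / 2
}

// bearingFromTDOA: bearing off the baseline between two receivers spaced
// `baseline` meters apart, given the difference in arrival times.
func bearingFromTDOA(dt, baseline float64) float64 {
	return math.Asin(soundSpeed*dt/baseline) * 180 / math.Pi
}

func main() {
	fmt.Printf("range: %.0f m\n", rangeFromEcho(8.05, 0.05))      // ~6000 m
	fmt.Printf("bearing: %.1f deg\n", bearingFromTDOA(0.002, 10)) // ~17.5 deg
}
```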

It wasn't. Nothing about the physics of sound in the ocean is simple. This is particularly true at the "head end" of the test range, the Eastern Pacific somewhere offshore from San Diego. This part of the world's oceans is known for maintaining an extreme thermocline, or temperature layering. The effect of the thermocline on sound transmission is so extreme that a ship may not be able to ping a target at a shallow depth (e.g. a submarine about to launch a missile on the test range) even if it's just a few miles away. The temperature layer creates a sort of channel that carries the sound away horizontally with no vertical penetration.

To address this, the system used a third component: a towed, manta-ray shaped fiberglass "fish" containing a transducer (the audio "speaker" of a sonar system). The fish was deployed on several hundred feet of cable to a position below the thermocline (and below the launch depth of the submarine). The fish pinged upward at the target through cooler, denser water.

The ship needed to know the exact location and orientation of the fish in order to resolve the range and bearing to the target. Of course the fish, hundreds of feet down, wasn't visible from the surface. So the ship tracked the fish by pinging it, which was possible despite the thermocline because of the sharp angle downward to the fish. The fish tracked the target. And the shipboard system was expected to compute the direct path from ship to target, the third side of the ship-fish-submarine triangle.
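In its simplest, flat, two-dimensional form, that "third side" computation is just the law of cosines. Here is a toy version; the numbers are invented, and the real problem was three-dimensional, with every angle needing attitude correction, as described next.

```go
package main

import (
	"fmt"
	"math"
)

// thirdSide: ship-to-target range, given ship-to-fish range, fish-to-target
// range, and the angle between those two legs as seen at the fish.
func thirdSide(shipToFish, fishToTarget, angleAtFishDeg float64) float64 {
	a := angleAtFishDeg * math.Pi / 180
	return math.Sqrt(shipToFish*shipToFish + fishToTarget*fishToTarget -
		2*shipToFish*fishToTarget*math.Cos(a))
}

func main() {
	// e.g. fish a few hundred feet below the ship, target 5000 m from the
	// fish, 150 degrees between the two legs at the fish
	fmt.Printf("ship-to-target: %.0f m\n", thirdSide(200, 5000, 150))
}
```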

Both ship and fish were of course subject to yaw, pitch, and roll. Small errors in angle on both paths from the tracking sonar to the target are magnified by distance into large errors of range or bearing. So in addition to the sonar transducer, the fish contained yaw, pitch, and roll sensors...which were unfortunately sensitive to acceleration; so accelerometers were added with the intent of correcting data from the primary sensors. The shipborne equipment, too, had three-axis sensors and accelerometers, in order to correct the range and bearing from ship to fish.

Sounds a little more complicated now, doesn't it? And the whole thing had to be implemented in a shipborne 19" rack-mount box with early 1980s computer technology.

The Computers

The system was required to produce range and bearing output at a fixed rate, once every few seconds. The STS architects (not me!) designed a distributed computing system with five 8085-based front-end data collectors and a single Q-Bus based LSI-11 for the data processing.

The 8085 was an 8-bit CPU with 16-bit addresses. It executed perhaps 500,000 instructions per second at a clock rate of 3 MHz. The 8085 boards designed for STS included the 8273 HDLC controller, a complex VLSI peripheral device; HDLC is a now-obsolete serial link communications protocol. These five 8-bit processors were connected to each other by HDLC, and were connected (somehow ... I don't remember) to the LSI-11 to provide the sonar and sensor data to the compute engine.

The 8273 HDLC controller was riddled with bugs. Sonatech eventually obtained an errata sheet (really a booklet) perhaps a quarter of an inch thick. My understanding is that the device was originally implemented for a specific contract in which Intel supplied IBM with controllers for IBM's proprietary version of HDLC, called SDLC. The devices were used in IBM point-of-sale (POS) terminals, after which Intel marketed them as general-purpose HDLC controllers. Apparently not much testing had been done to ensure conformity with the non-proprietary HDLC protocol, which was an intergalactic standard back in its day. Getting these devices to work cost the project several months.

In practice, the 8085s spent most of their compute cycles executing the custom HDLC communications software which Sonatech called CLIP (Communications Link for Interface Processors?). Written in a combination of PL/M and assembly language, the CLIP protocol code was expensive for these tiny processors. It barely left them cycles to perform their real function, which was collecting, bounding, and scaling data from the sonar and the various sensors. Floating point arithmetic, which had to be done completely in software (and in 8-bit chunks) on the 8085, was pretty much out of the question. The end result was that much of that work was pushed back on the LSI-11 based compute engine.

The LSI-11 was the VLSI-based low end of Digital Equipment's venerable PDP-11 computer family. It was implemented as four 40-pin ICs and an optional fifth 40-pin math processor device, microcode customizations of Western Digital's MCP-1600 chip set. Digital didn't supply chips to board-level designers; all users bought systems or CPU boards designed and constructed by Digital. This implied some sort of board-level connection bus, and Digital specified Q-Bus, a 16-bit wide interconnect. The Q-Bus was "open"; third parties could (and did) implement various boards for use in Q-Bus systems.

Sonatech integrated a Q-Bus system for STS. The LSI-11 was programmed in a combination of Digital FORTRAN and PDP-11 assembly language. Development was done on a small PDP-11 running RT-11. The code took the raw sonar and sensor readings, bounded, scaled, and translated them (since the 8085s couldn't), and estimated the position of the "cooperative target". The estimation code had been written by a skilled mathematician. It was very sophisticated: it used a Kalman Filter to maintain an estimate of the target's range and bearing.
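For readers who haven't met one, here is a minimal, one-dimensional Kalman filter in Go, just to show the predict-and-correct cycle the text refers to. The real STS filter tracked range and bearing with a multi-dimensional state and was far more sophisticated; the names and noise values here are invented for illustration.

```go
package main

import "fmt"

type kalman struct {
	x float64 // state estimate (e.g. range to target, meters)
	p float64 // estimate variance
	q float64 // process noise added per step
	r float64 // measurement noise
}

func (k *kalman) step(measured float64) float64 {
	// predict: with no motion model, the estimate carries over,
	// but its uncertainty grows
	k.p += k.q

	// correct: blend the prediction with the new measurement
	gain := k.p / (k.p + k.r)
	k.x += gain * (measured - k.x)
	k.p *= 1 - gain
	return k.x
}

func main() {
	k := kalman{x: 6000, p: 100, q: 1, r: 25}
	for _, z := range []float64{6010, 5995, 6020, 6005} {
		fmt.Printf("estimate: %.1f m\n", k.step(z))
	}
}
```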

Testing STS


I made three one-week trips to sea as "contractor personnel" on the USS Point Loma. I am prone to seasickness and wore scopolamine patches, which make my memory of the three trips pretty weird ... in large doses, scopolamine is a sort of nightmare psychedelic. The first of the three trips occurred during the severe Pacific storms of the November 1982 El Niño. I was on the stern of the Point Loma, with a lifeline and life jacket, operating a winch to recover the "fish" around the time of the lowest barometric pressure ever recorded in the Eastern Pacific. The Point Loma was big. 30 feet up on the wave, 30 feet down into the trough.

The system didn't work very well. The prime contractor, which I believe was Sperry Gyroscope, was not happy. It didn't work much better the second time I went out, either, although the weather was better. Each of these shakedown cruises cost millions of dollars - the cost of operating a US Navy ship at sea for a week. I'm sure we didn't have the only troubled piece of the entire Trident missile test system, but we had a budding project disaster on our hands.

My Contribution


I was straight out of school when I joined this project. I had some work experience, even experience in tech; but I'd never stepped up to taking responsibility for the outcome of a project. It took me about 6 months to get my feet on the ground, stop nibbling around the edges of the code, and try to get a grip on it.

The system architects had done one thing very well: all the sensor data arriving at the LSI-11 was recorded on a 9-track reel-to-reel tape. The tape could be played back in the lab at Sonatech, providing a bit-for-bit, second-for-second identical simulation against which we could run updated versions of the compute engine software.

The mathematical part of the code was very difficult, and I was loath to touch it. But I did come to understand the code's structure. It had an outer loop which ran forever. The outer loop initialized the filter and entered the inner loop. The inner loop began periodically calling subroutine DSCALE, which prepared the latest sensor readings, and then calling the filter code. If serious errors occurred, like the loss of sensor data for a long time, the code was supposed to break the inner loop and reinitialize the filter.

I said "...break the inner loop", but the compute software was written in FORTRAN IV with numeric labels and GOTO statements. There was no structured control flow. The initialization code was labeled 240. The inner loop, which iteratively called the Kalman filter with the latest data, was labeled 250.
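To make that structure concrete, here is a paraphrase in Go rather than FORTRAN IV; the helper names and data are invented, and the comments mark where labels 240 and 250 sat in the original.

```go
package main

import "fmt"

type estimate struct{ rangeM, bearingDeg float64 }

// dscale stands in for subroutine DSCALE: hand back the next prepared sensor
// frame, reporting ok=false when the (toy) data runs out.
func dscale(frames *[][2]float64) (m [2]float64, ok bool) {
	if len(*frames) == 0 {
		return m, false
	}
	m, *frames = (*frames)[0], (*frames)[1:]
	return m, true
}

// kalmanStep is a placeholder for the real filter update.
func kalmanStep(e estimate, m [2]float64) estimate {
	return estimate{rangeM: m[0], bearingDeg: m[1]}
}

func main() {
	frames := [][2]float64{{6010, 17.2}, {5995, 17.4}, {6020, 17.3}}

	for len(frames) > 0 { // "240": (re)initialize the filter, then enter the inner loop
		e := estimate{}

		for { // "250": one filter iteration per sensor frame
			m, ok := dscale(&frames)
			if !ok {
				break // serious error (or, here, end of data): reinitialize
			}
			e = kalmanStep(e, m)
			fmt.Printf("estimate: %+v\n", e)
			// The bug: the original ended this loop with GOTO 240, the
			// equivalent of `break` right here, so the filter reinitialized
			// on every pass. GOTO 250 (just looping again) was the intent.
		}
	}
}
```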

Playing back the tape, it was easy to show that the system didn't work very well. It just made inaccurate predictions. We didn't know why.

One day, as I was looking at the code, I noticed that the bottom of the inner loop said GOTO 240. I'd probably read it several times before. "240", I thought. GOTO 240. Wasn't 240 the location of the initialization code? It was. Shouldn't that have said GOTO 250? It  should have. The filter had never been filtering: it had been reinitializing itself on every iteration. Two shakedown cruises. Millions of dollars. And a filter that didn't filter.

I showed it to my boss, one of the architects. His eyes bugged out. It was unspoken: we fix this. We don't talk about it.

The next trip out, my third and last, the Sperry representative (who was an ex-Navy sort and quite technical) was all smiles. "I've never seen the system work like this", he said.

I'm sorry, John. We just couldn't tell you.

I don't know what happened to STS. Around that time I was a player in a minor security leak; a representative of Sperry divulged the proposed date of an actual Trident launch, assuming I had the proper clearance. I did not, and I assume the Sperry guy was seriously pissed when he found out. I believe my management quietly slid me off the project and on to others, which was fine with me: I hated going out on that ship. I left Sonatech in early 1984. I don't know if STS was successfully delivered, cancelled, or what.

I understand that in previous submarine missile tests, the Polaris and Poseidon missile test programs, the submarine had carried a mast with a radar reflector that stuck out of the water from launch depth. This made things easier for the ubiquitous Russian "trawlers" (spy ships). For the Trident program, they tried to avoid this with STS, which allowed them to orient the antennas without the radar mast. Maybe STS was successful. Maybe they went back to the radar mast. To this day I have no idea. I'm sure I never will.



Wednesday, December 28, 2022

Nuclear Fusion: the Big Lie

About This Post

During December 2022 I began reading about nuclear fusion - I mean controlled fusion for electric power production. I've discovered that we've all been told a Big Lie. It's the kind of deep, structural lie that becomes the consensus story of a group of experts in their communication with the public and goes on for decades, making it difficult for any single expert or small group to correct.

The big lie is simple: that the fuel for nuclear fusion is cheap and limitless. The truth is different, so different it's almost impossible to believe: the fuel for nuclear fusion here on Earth is fantastically expensive and desperately constrained - so desperately that it's unclear how we can bootstrap a fusion power industry at all.

If you are curious about this and want to know more, I hope you'll read on. I'm confident that everything I write here is on a solid technical foundation; there are references at the end, including references to original scientific papers published by the world's leading experts on this topic.

Just a Little Physics


Hydrogen is the most plentiful element in the universe, and the Sun fuses ordinary hydrogen to make heat; this much is true. But no fusion reactor ever seriously contemplated on Earth has even considered using this process. The required temperatures are far too high to achieve in "magnetic bottles" or by "laser confinement" or any of the other techniques that have been investigated during the past 75 years, even in the cores of our weapons. Equivalently, we can say the reaction rate is much too low to be useful.

Fusion reactions that occur here on Earth - and this includes our weapons - rely on the fusion of unusual variant forms ("isotopes") of hydrogen. There are two, deuterium and tritium, so the reaction is often called "D-T fusion." Deuterium is naturally occurring, common enough, and not radioactive; it can reasonably be characterized as "cheap, safe, and limitless".

Tritium is a different story.

Tritium doesn't exist naturally on Earth except in trace quantities. It's radioactive, and has a fairly short half-life (12.33 years), so stocks of it simply disappear over time. It can be made, but only in particle accelerators and nuclear reactors, at great cost; its current retail price is around $30,000 per gram. The world's entire inventory is much less than 100 pounds. Existing stocks were created by an aging fleet of nuclear (uranium fission!) reactors of a specific design, mostly in Canada, at a rate that exceeds the inevitable radioactive decay by maybe half a kilogram (about a pound) a year.
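To see why the half-life matters so much, here is a back-of-the-envelope calculation. The 25 kg inventory figure is my own round-number assumption, used only for illustration; the true figure isn't public.

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	const halfLifeYears = 12.33
	const inventoryKg = 25.0 // assumed round number, for illustration only

	// fraction of any tritium stock lost to decay each year
	fractionPerYear := 1 - math.Pow(2, -1.0/halfLifeYears)
	fmt.Printf("fraction lost per year: %.1f%%\n", 100*fractionPerYear) // ~5.5%
	fmt.Printf("loss from a %.0f kg stock: %.2f kg/yr\n",
		inventoryKg, inventoryKg*fractionPerYear) // ~1.4 kg/yr
}
```

In other words, a stockpile of that size loses well over a kilogram a year to decay alone, which is why a net surplus of roughly half a kilogram a year is so thin.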

Fusion power reactors will require kilogram quantities. The first large fusion facility, ITER, now under construction in southern France, is scheduled to consume most of the world's inventory. ITER does not make electricity (it's a science experiment). It will severely deplete the world supply, leaving essentially no tritium for follow-on projects. There are no clear plans in place to create any tritium to bootstrap follow-on reactors.

I know this may be difficult to believe. It sounds like the world's fusion scientists are impossibly stupid. There are many reasons for this, and the full story is too long for this blog post. But there is a candle of hope, and it's bound to be mentioned by any defender of the fusion power industry.


Tritium Breeding


In the previous section I said tritium could be made "...in particle accelerators and nuclear reactors." This includes nuclear fusion reactors; they can, in theory, "breed" their own tritium. They can do this by splitting atoms of another element, lithium, into atoms of tritium. In theory, again, this can be done continuously, in a lithium "breeder blanket" surrounding the reaction chamber, making them self-sufficient.

This "continuous tritium breeding" concept is the only solution to the worldwide tritium shortfall that has been widely discussed. Unfortunately, it has many issues. First and most obviously, it doesn't begin to address the startup problem, which I've called "bootstrapping." Large D-T reactors will require a lot of tritium to start up, and they can't breed any tritium until they're operating. It's currently unclear how the world will get past the tritium shortfall in the late 2020s and 2030s, as the unique Canadian reactors reach their design lifetimes and ITER consumes most of the world's existing stockpile.

Second, the breeding ratios are not very high. The "doubling time", the time required to breed enough fuel to start a second reactor, is probably measured in years (yes, years). See the Notes for references on this.

This leads to the third issue: the difficulty of breeding sufficient tritium places a challenging set of constraints on the design of fusion reactors, which were already some of the most complicated things that people have ever tried to build. Tritium is extremely difficult to handle. It's radioactive--every part of the tritium handling system will become low-level radioactive waste. It gets into everything--it even diffuses through steel plates! The design of the tritium recycling system will affect every part of the reactor design.

Perhaps worst of all, fusion reactors will have to operate almost all the time to create the fuel required for their next restart plus a small surplus. They'll have to achieve extremely high availability numbers. And I mean they will have to - there will not ever be sufficient tritium to restart them again and again without long periods of operation between the restarts to breed the necessary tritium surplus.

This is an incredibly high barrier for a new technology that was extremely challenging to begin with.

The Bottom Line


After carefully reviewing the facts (see my notes below), I've come to believe that ITER, which has a design lifetime measured in days and is not designed to produce electricity, will not be the prototype of a new generation of power plants. Rather, ITER will be the last D-T fusion plant ever constructed. The bootstrapping problem seems insurmountable to me: in practice, governments would need to invest billions of dollars in specialized nuclear reactors designed to make the tritium required to bootstrap the next generation of fusion reactors. And they'd need to start now, because building up our tritium stocks will require decades.

But to argue for these facilities, fusion's community of experts would need to unwind their decades of lies about "clean and limitless" fuel. They'd need to appeal to governments and investors for billions of dollars to build specialized facilities to breed fuel.

This is not going to happen. Rather, what we can expect over the next several years will be more like the popping of a large, decades-long bubble. Funding for D-T fusion will dry up, leaving only minimal funding for more exotic concepts (see notes). These exotic alternatives require much higher temperatures and/or compressions, making them likely to become power alternatives in the 22nd century rather than the 21st.

Notes


The best overall writeup I've found about this topic is on the web site of the highly-respected journal of the American Association for the Advancement of Science (AAAS), Science, posted June 2022.

The best current example of this history of lies about "cheap and limitless" fusion is on the home page of Commonwealth Fusion Systems, a well-funded D-T fusion startup spun off from MIT. It says "One glass of water will provide enough fusion fuel for one person's lifetime." This is a flat-out lie - a glass of water won't fuel any fusion at all, here on Earth. There's no tritium in a glass of water other than the trace amounts that are occasionally created by cosmic rays.

This entire story, along with other debunking of the fusion lies I haven't even mentioned, was substantially driven by Steven Krivit and his web site, New Energy Times. Mr. Krivit is an interesting fellow; his site was formerly dedicated to coverage of "Low Energy Nuclear Reactions" (LENR), the topic previously known as "cold fusion". But when the latest surge of interest in "cold fusion/LENR" began to die out during the 20-teens, he turned to debunking the "conventional" fusion industry. He began by debunking the false power production claims of ITER and more recently moved on to this topic, fusion fuel.

Mr. Krivit has done more than any other single individual to raise the world's awareness of these issues. His site contains quotes and videos documenting the enormous extent and duration of the Big Lie. You can also watch this excellent video, linked from Mr. Krivit's site, which makes most of the points I've made here.

Next, there are some Wikipedia links. You can read about the unique Canadian reactor design which has serendipitously created the world's entire tritium inventory. And you can read about the ITER project, an ongoing multinational effort to build a "tokamak" (donut-shaped) D-T fusion reactor in southern France. I'm engaged in trying to get a false statement about "fusion processes of the Sun" removed from the first paragraph; you can see my comment on the Talk page. You can also read about ITER on their own site.

And what about our weapons? A surprising amount is publicly known - because it has leaked, or is deducible from the physics, or has been declassified. Weapons do use small quantities of tritium for "fusion boosting". The Department of Defense contracts with the Tennessee Valley Authority to place lithium rods in a couple of the TVA's uranium fission power reactors, periodically collects the rods, and reprocesses them to gather the generated tritium for its needs (which are small, a few grams per weapon, refreshed every few years). You can read about fusion boosting.

Large "hydrogen" (really "thermonuclear") weapons do perform D-T fusion, but don't package tritium. Instead, they package lithium and deuterium in the form of "lithium deuteride" with an atomic bomb as the trigger. During the detonation, the immense neutron flux creates tritium from the lithium and D-T fusion follows. This option obviously isn't available for power reactors since they don't have such a large neutron flux - they're not triggered by atomic bombs.

So the Defense Department avoids the need for an inventory of large quantities of tritium. All the nations that have thermonuclear weapons seem to have hit on lithium deuteride as the fuel. You can read about this too.

From this design and other knowledge of the US nuclear weapons program, we can take away an important fact. Lithium, like hydrogen, has two stable isotopes. Only one is useful for breeding tritium, and it's the less common of the two. It's necessary to perform isotopic enrichment of natural lithium to create the stuff that ends up in the breeding blanket, and isotopic enrichment is inherently difficult - both variants are "lithium", so it's necessary to exploit their subtle differences to change their proportions. The US used the environmentally nightmarish COLEX process during the 1950s to create the stockpile of isotopically-enriched lithium that (when mixed with deuterium) has provided the fuel for our thermonuclear weapons ever since.

Which brings us to the scientific papers, the first of which is about lithium isotope enrichment. You can read the abstract and the introduction to confirm several of the points I've made here. A description of the fusion process used by the Sun (but nowhere on Earth) is on the Hyperphysics site.

But the most important scientific paper by far is the work of M. Abdou, a physics professor at UCLA. Prof. Abdou is without doubt the world's leading expert on the topics I've discussed with nearly 40 years of experience. For this paper he assembled a diverse team of experts from many disciplines. The paper is really an engineering feasibility study, not a physics paper; it's difficult because the engineering concepts are very involved, but it doesn't demand extensive knowledge of physics.

You can also read conference presentations by Prof. Abdou. [Update: due to link issues, visit this page https://www.fusion.ucla.edu/presentations/ and click on the second of the three presentations for 2022.] Skip directly to his "concluding remarks" on slides 30 and 31. For a person of Prof. Abdou's stature, these conclusions are apocalyptic: he's saying, bluntly and directly, that the worldwide community's present path (i.e. the "DEMO" production scale fusion reactor currently planned to follow ITER) is impossible; that the community must realign around production of tritium. As I mentioned above, I don't think this is likely. Rather, I think the effect will be more like the popping of a bubble, and progress toward workable fusion reactors will be set back by 30 years, if not forever.

Finally, there are a few companies doing research on fusion that doesn't use tritium. Every one of the alternative fusion reactions requires higher temperatures and/or pressures than D-T fusion, so these approaches are all more speculative (in the main text, I used the term "exotic"). One example is Helion, a privately funded company that proposes to fuse deuterium with helium-3 (which has to be bred in the reactor because it's also not found on Earth in substantial quantities). [Update: here is a video about Helion.] TAE Technologies proposes a different set of alternative physics.

I hope one of these startups succeeds, because I don't think D-T fusion will.
