
By Garrett Seeley
Recently, our team worked on a 3D reconstruction system that had no monitor, yet contained a powerful video card. This type of configuration is called a "headless" system, as there is no way to see what the reconstruction system is doing without additional equipment. Normally we add a monitor or use remote desktop access to see the headless system's operations. Either way, it is a system without a direct display. Let's discuss why a high-end video card is present in a headless system.
Consider normal data flow for X-ray imaging systems with reconstruction. In such X-ray systems, an X-ray detector captures a digital image and forwards the information to the 3D reconstructor. The reconstructor then assembles the raw data into the required 2D or 3D image. Afterwards, it sends the results to the control system for final processing, technologist review and archiving. This is all accomplished using a headless 3D reconstructor with an open-source operating system (Linux) and a high-quality, expensive video card.
The odd part about this model is that the video card often has as much or even more processing power than the CPU of the system it is installed in. This explains why a computer that never drives a display may still need a powerful video card, even when the card alone costs half as much as the rest of the system.
BULLDOZER VS. MANY SHOVELS
The most important concept with 3D reconstruction, and with AI-powered systems, is that they do not handle data in the same way a typical desktop computer would. To put it into an analogy: A CPU, the main processor of a computer, is big and powerful. It can perform intense calculations and process extremely difficult tasks. It is raw power, like a bulldozer. This is impressive and great for heavy calculations. However, this is not what video systems require.
In video systems, processing must focus on the needs of each pixel in a display. Reconstructing pixel data takes an enormous number of small calculations, so the task is best approached with an army of small processors, called video cores. Unlike CPU cores, these are not designed for large processing tasks but for small ones. A modern video card contains thousands of such cores, and the chip that hosts them is called a GPU, or Graphics Processing Unit. A CPU, by contrast, currently has about 4 to 16 cores.
GPU cores are far less powerful individually, but they excel in numbers. A processor's theoretical paths for data, called threads, typically number about 1.5 to 2 times its core count. This means that while a CPU can perform dozens of complex tasks simultaneously, a GPU can perform tens of thousands of simple tasks in the same timeframe. This is the processing advantage of a video card: it excels at performing huge numbers of small calculations.
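The per-pixel workload described above can be sketched in a few lines of Python. This is a hypothetical illustration, not code from the reconstruction system: each pixel needs only a tiny calculation, and because every result depends on nothing but its own pixel, the work splits cleanly across any number of cores.

```python
def brighten(pixel, amount=40):
    # One tiny, independent calculation: add brightness to an 8-bit
    # pixel value and clamp it to the 0-255 range.
    return min(pixel + amount, 255)

# A stand-in "image": just a short list of pixel values for illustration.
image = [100, 200, 250, 30]

# Each output depends only on its own input pixel, so on a GPU every
# element could be handed to a different core and computed at once.
result = [brighten(p) for p in image]
print(result)  # -> [140, 240, 255, 70]
```

The calculation itself is trivial; the point is that a real image has millions of pixels, which is exactly the kind of job that maps onto thousands of small cores rather than a handful of big ones.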
For this reason, GPUs are also very good at running the large language models behind AI systems. GPUs started out as strict video-task processors, but programmers later found it possible to write general-purpose software for video cards, giving them tasks that were previously reserved for a CPU. Cryptocurrency mining, for example, was historically performed on banks of video cards rather than on a powerful CPU; the CPU was simply the wrong tool for the job. The same is true for AI systems. The work behind an AI model is not individually difficult, consisting mostly of multiplying and adding numbers, but the number of calculations required is staggering, and they can run in parallel. This is why video card companies have recently reached trillion-dollar valuations on the strength of AI demand. GPUs are simply far better suited than CPUs to AI workloads.
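The number-crunching at the heart of an AI model is mostly matrix multiplication, and a matrix multiply decomposes into many small, independent calculations. A minimal Python sketch, purely illustrative, shows the shape of that work:

```python
def matmul(A, B):
    # Multiply two matrices the way a GPU sees the job: every output
    # cell is an independent dot product (a run of multiply-adds),
    # so all of the cells could be computed at the same time.
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
print(matmul(A, B))  # -> [[19, 22], [43, 50]]
```

A real model multiplies matrices with thousands of rows and columns, millions of times over; no single step is hard, but the sheer count of simple steps is what makes thousands of small cores the right tool.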
The question is what type of processing best fits the application. Does the task call for a CPU to come in like a dozen bulldozers, or for a video card to come in like an army of 10,000 shovels? This is why some tasks are best performed by a video card rather than a CPU.
VIDEO CARD FLEXIBILITY
Because video cards are traditionally added to a computer as expansion hardware, they are easy to scale: several cards can be installed and even linked together, making it relatively simple to expand a system for GPU tasks. Video cards also carry their own built-in memory. Because this memory is purpose-built for bandwidth, it usually runs a generation or two ahead of main system memory (GDDR6 vs. DDR4 RAM), and it is entirely possible for a system to have more video memory (VRAM) than main system memory. Given the requirements of 3D reconstruction, AI, and even virtual reality systems, a video card becomes the clear choice for expanding and upgrading a system. This explains why video card demand and costs have ballooned over the past decade.
WHAT THIS MEANS FOR MEDICAL SYSTEMS
In the future, we will see more systems running multiple video cards even when there is no need to drive a display. The cards may not be connected to anything visible, because that isn't their primary task. The system software will use them for additional small-scale computational work other than displaying information. It's all about the number of simultaneous tasks over the size of the task. Ironically, the video card was originally designed only to assist a computer with its display; now video cards are steadily being integrated into main computer system operations. Expect to see this in more medical applications in the future.

