Adam Niederer

Software Performance on Prospective E-Waste

I proudly still use my 2012-era i5 and spacious 120GB SSD. These parts have been through a lot, including falling down a few flights of stairs, but they're still ticking thanks to a bit of creative usage and clever engineering. The most important thing, though, is that they have not yet become e-waste.

The first step of Reduce, Reuse, Recycle is to reduce. Assuming the average person is on a roughly 5-year upgrade cycle, extending that by one year reduces pressure on natural resources and e-waste disposal facilities (read: landfills) by roughly a sixth.

Eventually, hardware will become too bereft of features to reasonably use. There's a reason I'm not still on a Pentium 3. However, these hardware transitions often happen on the order of tens of years. AVX was released more than ten years ago, and it is still not required by the vast majority of programs. A Radeon 7000-series card from 2011 will still run all but the most cutting-edge applications today. Even if this weren't the case, the average user isn't concerning themselves with instruction set extensions or API feature levels anyway.

There are quite a few things developers can do to reduce performance-related upgrade pressure on users. Combined with a bit of tactical decision making on the part of the user, these measures can significantly delay the hardware purchases that end with existing hardware being decommissioned.

Delaying the Upgrade

Most users will only upgrade their hardware when there is a perceived need to do so. Telling people to be patient while waiting for a loading bar isn't a great way to effect any kind of change, so the onus lies on software developers to make their programs work as well as possible on slow hardware, stealing seconds wherever they can to reduce the amount of time users spend waiting.

Batch Processing

As unintuitive as it may sound, allowing a user to group a bunch of small jobs into a larger job that takes longer to run can be massively beneficial. Because humans are so bad at context switching and remembering things in the medium term, it is often less of a mental burden to run something for thirty minutes than it is to run something for three minutes, then run another thing for three minutes, ad nauseam. After a certain point, maybe 20-30 seconds, it makes more sense to clump as much work as possible together and let the user meaningfully switch to another task while the computer plugs along in the background. This usually maps well to initial or final stages in a workflow, such as exporting multimedia or setting up a software package. The less time one has to spend babying a render menu or an installer, the better.
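
As a sketch, a batch interface can be as simple as a queue that the user fills up and then kicks off once. Clip and exportClip() here are hypothetical stand-ins for whatever unit of work the application actually does:

    // A queue of small jobs the user can fill up and run in one sitting.
    // Clip and exportClip() are hypothetical; substitute your own job type.
    interface Clip { name: string }

    declare function exportClip(clip: Clip): Promise<void>;

    const queue: Clip[] = [];

    function enqueue(clip: Clip): void {
      queue.push(clip); // returns immediately; nothing runs yet
    }

    async function runBatch(): Promise<void> {
      // Drain the queue back to back so the user only waits once,
      // ideally while they're doing something else entirely.
      for (const clip of queue.splice(0)) {
        await exportClip(clip);
      }
    }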

Humans also have this great thing where they're usually unconscious for eight hours per day, which can be used to run expensive jobs. Of course, the one problem is that the user has to remember to kick off the job right as they're about to go to bed or get off work, which isn't usually the best time to have somebody remember something. Both Windows and Linux have means of scheduling jobs to be run at certain hours of the day, and both are also capable of waking the system from suspend states to perform the job. Software should be able to take advantage of these mechanisms to perform large jobs. A job that takes eight hours actually runs instantly from the perspective of a user who was unconscious during its runtime!

The overnight technique is best applied to jobs which take anywhere from 30 minutes to 16 hours - long enough that nobody wants to sit through them, but short enough not to bleed into the next day's work.
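
As a minimal sketch of the idea in application code - assuming the process stays resident and the machine stays awake; waking from suspend is the job of OS facilities like cron, systemd timers with WakeSystem=, or Windows Task Scheduler's wake option:

    // Defer a heavy job until a quiet hour instead of running it now.
    function runTonight(job: () => void, hour = 3): void {
      const now = new Date();
      const next = new Date(now);
      next.setHours(hour, 0, 0, 0);
      if (next.getTime() <= now.getTime()) {
        next.setDate(next.getDate() + 1); // already past that hour today
      }
      setTimeout(job, next.getTime() - now.getTime());
    }

    runTonight(() => {
      // e.g. re-encode the render queue or rebuild search indices
      console.log("running the overnight batch");
    });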

Automatic Job Creation

A user doesn't have to be asleep to not notice longer runtimes - we can use time when they're distracted, as well. By starting jobs that we know a user will eventually want to run, without waiting for the user's prompting, all of the time taken before a user thinks to kick off the job is effectively free.

For example, if the relevant subset of a test suite takes two minutes to run, having it watch files and run as a lower-priority job in the background on change can reduce the perceived runtime in many cases, because it will have already done 30-60 seconds of its job by the time the user has ensured that the program compiles and lints. If the user needs to verify something manually as well, it's likely that the test suite will have finished by the time the user wants to look at its results.
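
A minimal Node.js sketch of that watcher, assuming a POSIX system with nice(1) available and an npm test script; the watched path and the command are placeholders:

    import { watch } from "node:fs";
    import { spawn, type ChildProcess } from "node:child_process";

    let current: ChildProcess | null = null;

    // Recursive watching needs a recent Node release on Linux.
    watch("src", { recursive: true }, () => {
      // Restart the suite on every change so results are ready early.
      current?.kill();
      // nice -n 19 keeps the run from competing with the editor and compiler.
      current = spawn("nice", ["-n", "19", "npm", "test"], { stdio: "ignore" });
    });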

Responsiveness by Brute Force

There are some situations where doing much more than the user wants can make the user think you're doing less. For example, web applications which load data for each new page can load that data when a link to that page is hovered, so that the data has a 500ms head start before the user clicks the link. Serendipitously, devices which are often on metered data connections frequently do not have any means of hovering over a link.
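
A minimal browser sketch of that head start, assuming links carry a hypothetical data-prefetch attribute naming the endpoint the next page would request:

    const prefetched = new Map<string, Promise<unknown>>();

    document.addEventListener("mouseover", (event) => {
      const target = event.target as Element | null;
      const url = target
        ?.closest("a[data-prefetch]")
        ?.getAttribute("data-prefetch");
      if (url && !prefetched.has(url)) {
        // Start the request on hover; the click handler awaits this same
        // promise, so the network round trip gets its head start.
        prefetched.set(url, fetch(url).then((response) => response.json()));
      }
    });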

This technique is usually best applied to jobs that don't take very long, but are frustrating to wait for, like loading data on a webpage or sorting/filtering long or unoptimized lists.

Efficient Realtime Modes

Many applications benefit from rendering at incredibly low preview resolutions on old hardware, such as 360p-540p, and could use upscaling techniques to improve preview quality at those resolutions, or to quicken responsiveness without an obvious drop in quality. Heck, a ton of video editing applications already have the previous and next second of video sitting in a buffer next to the current frame, and video codecs are often able to estimate motion vectors. Video compositing software has the real motion vectors for its animations. Perhaps temporal upsampling could be applied here for even better quality.
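
A minimal browser sketch of the render-small, upscale-to-fit approach, where renderFrame() is a hypothetical stand-in for the application's actual drawing code:

    declare function renderFrame(
      ctx: CanvasRenderingContext2D, width: number, height: number): void;

    const display = document.querySelector("canvas")!;
    const displayCtx = display.getContext("2d")!;

    // Render into an offscreen canvas at half the width and height,
    // i.e. a quarter of the pixels.
    const preview = document.createElement("canvas");
    preview.width = display.width / 2;
    preview.height = display.height / 2;
    const previewCtx = preview.getContext("2d")!;

    function drawPreview(): void {
      renderFrame(previewCtx, preview.width, preview.height);
      displayCtx.imageSmoothingEnabled = true; // let the browser smooth the upscale
      displayCtx.drawImage(preview, 0, 0, display.width, display.height);
    }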

Most applications with low-level control of their rendering stack already have faster methods of rendering previews: CAD software, video editors and compositors, game engines, and 3D modeling software all do this. However, I'll add that these modes should be the very first target for optimization and acceleration, because this is the environment in which a user actually does their work, and the user is often reliant on information presented in this view. Knocking export time down from four hours to one hour is impressive, but that can be done overnight; taking a four-second preview time down to one second will save a lot more human time. This is a place where environmental impact and business efficiency both benefit from improvements; users of software with preview windows often command high salaries.

This also applies to software compilation and reloading; although we don't talk about it in terms of frame rate or resolution, a webpage or application that can intelligently hot-reload on changes is much cheaper to work on than one that requires a full reload, or, god forbid, a recompile. Much like the preview window, it also places less upgrade stress on the user.
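
For example, with a bundler that exposes the Vite-style import.meta.hot API, a module can opt into being swapped in place rather than forcing a full page reload; render() here is a hypothetical redraw function:

    declare function render(): void; // hypothetical: redraws from current state

    render();

    if (import.meta.hot) {
      // When this module changes, re-run it and re-render in place,
      // keeping the rest of the application's state intact.
      import.meta.hot.accept(() => render());
    }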

Hardware Acceleration

Almost every computer built in the last ten years has 256-bit SIMD units, some kind of graphics unit that will run some kind of OpenGL and OpenCL, and a hardware video decoder. An old computer that uses any of these hardware facilities can outrun a modern computer that doesn't. This is most obvious in the field of video games, where dedicated graphics hardware from fifteen years ago will still run circles around modern processors' software rendering capabilities. Supporting these in the default configuration helps power users and users of old hardware alike.
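
In a browser context, a sketch of this might look like feature-detecting the faster path and only falling back to software when it's missing:

    const canvas = document.createElement("canvas");

    // WebGL contexts are typically backed by the GPU, even an old one.
    const gl = canvas.getContext("webgl2") ?? canvas.getContext("webgl");
    if (gl) {
      console.log("rendering with the GPU");
    } else {
      console.log("falling back to software 2D rendering");
    }

    // WebCodecs can expose the platform's hardware video decoder where present.
    if ("VideoDecoder" in window) {
      console.log("hardware-assisted video decode may be available");
    }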

Minimizing the Upgrade

Of course, a user will eventually have to get new hardware. Be it for new features or better performance, once an upgrade is inevitable the goal is to keep old hardware in use until it is physically incapable of functioning.

Working Hardware to Death

LVM lets one pretend multiple drives are actually one larger drive, and can also use lower-capacity flash or x-point storage as a cache. Having older storage hardware is also a great motivator to thoroughly back up one's data. To mitigate annoyance, one can keep a volume group of older hardware, upon which only easily-restored data is kept. Movies, games, and pictures are all good candidates for this volume group, since one can restore from yesterday's backup with a simple file copy or re-download. Near-death flash storage can be set as a write-through LVM cache, to avoid data loss in case of failure.
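
As a sketch of that cache setup (vg0, media, and /dev/sdX1 are placeholders, and the exact invocations depend on your LVM version):

    # Add the old SSD to an existing volume group, carve it into a cache
    # pool, and attach it in write-through mode so its death loses no data.
    vgextend vg0 /dev/sdX1
    lvcreate --type cache-pool -L 100G -n oldssd vg0 /dev/sdX1
    lvconvert --type cache --cachepool vg0/oldssd --cachemode writethrough vg0/media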

Older compute resources can be put to use in more limited roles or given to those who can still find use for them. Any once-powerful PC from the last twenty years can handle presentation software and word processing.

One thing to note is that power supplies may not be the best components to work to death, since they can damage other components or present a safety hazard upon failure. They're also quite dangerous to repair, so decommissioning them before they are likely to fail is a good idea, in my opinion.

Multi-Generational Hardware

There are quite a few PC components which are overwhelmingly unlikely to become obsolete, such as fans, heatsinks, cases, and peripherals. These parts do not directly contribute to the speed of one's computer, so they never need to be replaced for performance reasons; buyers who wish to reduce e-waste should opt for parts which have extremely long-term support and are built to be durable.

For example, Noctua's SecuFirm heatsink mounting hardware is supported on sockets from LGA775 to AM4. This kind of support allows one to use the same heatsink for fifteen years, discarding only the mounting hardware upon upgrade instead of the entire unit.

On Energy Usage

Old hardware is obviously much less efficient than new hardware. Even though an old Sandy Bridge chip consumes a third of a new Zen chip's power envelope, the Zen chip will finish the same work far more than three times sooner, so it comes out ahead on energy per task. However, refining and transporting the raw materials for new hardware is itself energy-intensive and often heavily reliant on fossil fuels, while many communities source some portion of their electricity from nuclear or renewable sources.

If you live in the United States, you can see your state's energy source composition here, and do your own cost/benefit calculations.

On Social Pressure

At the beginning of this piece, I mentioned that some individuals and many corporations upgrade hardware far more frequently than may be necessary. Although I am leaving the corporate angle to the think tanks, there is another pressure people feel that this article's techniques do not address: social pressure.

Software developers have a long history of making disastrous social suggestions, so I won't put forth any specific answers, but I think the problem statement is clear: We as a society currently laud frivolous consumption, and often treat technology like fashion. Unfortunately, unlike most fashion, one cannot dispose of e-waste without significantly polluting the environment.

In an ideal world, the followup question to "Is that a new phone?" would be "Oh, how'd your old one break?", but unfortunately that's outside of an engineer's wheelhouse.