
In a previous post, I gave a quick rundown of our three basic design principles. This time, I want to dive deeper into one of those principles: K.I.S.S.
Serviceable By Design
A Formula One pit stop takes less than three seconds to complete. That is due, in no small part, to highly trained technicians. But just as importantly, it’s because the car is serviceable by design. Getting the car serviced and back in the race quickly matters, and details like a single wheel nut per tire make it easier for the pit crew to do their jobs fast.
The same is true in system design. Routine maintenance can degrade performance and slow work down, and unplanned outages can grind it to a halt. Anything that shortens or prevents these incidents pays off directly in uptime and productivity.
Simple design makes it easy to identify and repair problems quickly: technicians can find the broken component right away, while in a complex system many different components have to be checked and tested. Simple design also allows for usable monitoring and alerting; with fewer components, there is less to keep an eye on.
There Were… Complications
Systems can perform complicated tasks and be used for complicated work, but that doesn’t mean they need to be complicated themselves. There is an elegance to solving complex problems with simple solutions. Adding complexity introduces interdependencies. A software upgrade on one component changes the way it works just enough to break the connection with another program on the server, which breaks the connection to a workflow, which means you can’t get work done.
Upfront Cost Saves Downtime Later
Complex systems are often born out of cost-saving measures. Open-source applications that each do one specific job can be chained together into a complicated pattern. That can be very appealing when building a system on a budget, but the downtime caused when that chain breaks often outweighs the deployment dollars saved.
A Complex Example
A manufacturing company has a line-of-business application to manage inventory and orders. The system runs on a pair of servers in different offices (one on the East Coast and one on the West Coast). The application processes orders from the website and handles inventory updates from the warehouse through email. Each office has an email address that is watched for messages with attachments. When an attachment comes in, a script running on the server saves it to a drop-off folder. When files land in that folder, another application copies them to the other office’s drop-off folder, keeping the two sides in sync. Finally, the application reads the files, imports them into its database, and deletes them.
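To make that chain concrete, the drop-off script at each office might look something like the sketch below. This is a minimal, hypothetical reconstruction; the mailbox address, credentials, folder path, and polling interval are assumptions for illustration, not details from the real system.

# Hypothetical sketch of the attachment drop-off script described above.
# The mailbox, credentials, folder path, and polling interval are all
# illustrative assumptions, not details from the actual system.
import email
import imaplib
import pathlib
import time

DROP_OFF = pathlib.Path("C:/dropoff")   # watched by both the sync tool and the import job

def save_new_attachments():
    conn = imaplib.IMAP4_SSL("imap.example.com")
    conn.login("inventory@example.com", "app-password")
    conn.select("INBOX")
    _, ids = conn.search(None, "UNSEEN")
    for msg_id in ids[0].split():
        _, data = conn.fetch(msg_id, "(RFC822)")
        msg = email.message_from_bytes(data[0][1])
        for part in msg.walk():
            filename = part.get_filename()
            if filename:   # assumes every attachment is a well-formed update file
                (DROP_OFF / filename).write_bytes(part.get_payload(decode=True))
    conn.logout()

while True:   # if this loop dies, inventory updates silently stop flowing
    save_new_attachments()
    time.sleep(60)

Nothing in that loop is exotic, but every line of it is a quiet point of failure, and it is only the first link in the chain.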
The employees never see any of these steps. They work only in the line-of-business application and know that when orders are placed or inventory is refreshed, the software updates.
This design can break in many ways. If the script stops running, an email isn’t formatted correctly, the sync software crashes, or the application fails to delete a processed file, the data will be wrong. Each of these components needs its own monitoring, and each has to be checked individually when something doesn’t work as expected.
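What does monitoring each component actually look like? Roughly something like the watchdog sketched below, which assumes each piece of the chain writes a heartbeat file when it runs; the paths and thresholds are made up for illustration.

# Hypothetical watchdog for the email/sync/import chain. Assumes each
# component touches a heartbeat file when it runs; paths and thresholds
# are illustrative.
import pathlib
import time

DROP_OFF = pathlib.Path("C:/dropoff")
HEARTBEATS = {
    "attachment script": pathlib.Path("C:/heartbeats/attachment_saver"),
    "folder sync":       pathlib.Path("C:/heartbeats/folder_sync"),
    "database import":   pathlib.Path("C:/heartbeats/importer"),
}

def age_minutes(path):
    return (time.time() - path.stat().st_mtime) / 60 if path.exists() else float("inf")

# Every component in the chain needs its own check...
for name, beat in HEARTBEATS.items():
    status = "OK" if age_minutes(beat) < 15 else "FAIL"
    print(f"{status:4} {name}: last heartbeat {age_minutes(beat):.0f} minutes ago")

# ...and so does the hand-off point between them.
stuck = [f for f in DROP_OFF.glob("*") if age_minutes(f) > 60]
if stuck:
    print(f"FAIL {len(stuck)} file(s) sitting in the drop-off folder for over an hour")

And of course, the watchdog is itself one more component that can fail silently.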
The Simple Solution
A better design would be to ditch the local servers and run the entire system in a public cloud such as Azure or AWS. The application developers and web developers can work together so that the line-of-business application and the website use the same database. With the application hosted in the cloud, there is no need to replicate data between coasts; both offices access the same server remotely.
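For a sense of how much disappears, here is a minimal sketch of the simplified flow, assuming the website and the line-of-business application both talk to one shared SQL database. The table and column names are invented for illustration, and a standard-library SQLite database stands in for the cloud database.

# Sketch of the simplified design: one shared database, no email, no drop-off
# folders, no sync jobs. sqlite3 stands in for the shared cloud SQL database;
# the table and column names are illustrative assumptions.
import sqlite3

db = sqlite3.connect("shared_inventory.db")
db.execute("""CREATE TABLE IF NOT EXISTS inventory (
                  sku TEXT PRIMARY KEY,
                  quantity INTEGER NOT NULL)""")

def warehouse_update_stock(sku, qty):
    # The warehouse writes counts straight to the shared table; both offices see it at once.
    db.execute(
        "INSERT INTO inventory (sku, quantity) VALUES (?, ?) "
        "ON CONFLICT(sku) DO UPDATE SET quantity = excluded.quantity",
        (sku, qty))
    db.commit()

def website_place_order(sku, qty):
    # The website decrements stock directly; there is no exported file to lose in transit.
    db.execute("UPDATE inventory SET quantity = quantity - ? WHERE sku = ?", (qty, sku))
    db.commit()

warehouse_update_stock("WIDGET-7", 120)
website_place_order("WIDGET-7", 3)
print(db.execute("SELECT quantity FROM inventory WHERE sku = ?", ("WIDGET-7",)).fetchone())

One data path, one set of credentials, one thing to monitor.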
Conclusion
Reducing the number of moving parts in a solution can cost more up front, but it saves time and money down the line compared to the unpredictable outages caused by needlessly complex systems.
Do you have a system that is too complex for its own good? Let us help you streamline and simplify it.