[V]ariety can destroy variety
There are more things in heaven and earth, Horatio,
Than are dreamt of in your philosophy.
In his book An Introduction to Cybernetics, published in 1956, the English psychiatrist W. Ross Ashby proposed the Law of Requisite Variety. His original formulation isn’t easy to extract into a blog post, but the Principia Cybernetica website has a pretty good definition:
The larger the variety of actions available to a control system, the larger the variety of perturbations it is able to compensate.
Like many concepts in systems thinking, the Law of Requisite Variety is quite abstract, which makes it hard to get a handle on. Here’s a concrete example I find useful for thinking about it.
Imagine you’re trying to balance a broomstick on your hand:
This is an inherently unstable system, and so you have to keep moving your hand around to keep the broomstick balanced, but you can do it. You’re acting as a control system to keep the broomstick up.
If you constrain the broomstick to have only one degree of freedom, you have what’s called the inverted pendulum problem, which is a classic control systems problem. Here’s a diagram:
The goal is to move the cart in order to keep the pendulum balanced. If you have sensor information that measures the tilt angle, θ, you can use that data to build a control system to push on the cart in order to keep the pendulum from falling over. Information about the tilt angle is part of the model that the control system has about the physical system it’s trying to control.
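To make the control loop concrete, here's a minimal sketch in Python. The constants, gains, and the textbook-style simplified model of the cart-pendulum dynamics are all assumptions chosen for illustration, not a faithful simulation. A proportional-derivative controller reads the tilt angle and decides how hard to accelerate the cart:

```python
import math

# Toy 1-degree-of-freedom inverted pendulum (illustrative constants only).
G = 9.81      # gravity, m/s^2
L = 1.0       # pendulum length, m
DT = 0.01     # simulation time step, s

def simulate(kp=40.0, kd=8.0, theta0=0.2, steps=500):
    """Balance the pendulum with a PD controller on the tilt angle theta.

    The cart's acceleration is the control input; the model assumes it
    couples directly into the angular acceleration (a common textbook
    simplification).
    """
    theta, omega = theta0, 0.0                # tilt angle (rad), angular velocity (rad/s)
    for _ in range(steps):
        # Controller: push the cart based on the measured tilt angle and rate.
        accel = kp * theta + kd * omega
        # Plant: gravity destabilizes the pendulum, cart acceleration counteracts it.
        alpha = (G / L) * math.sin(theta) - (accel / L) * math.cos(theta)
        omega += alpha * DT
        theta += omega * DT
    return theta

print(f"final tilt angle: {simulate():+.4f} rad")   # settles near zero if the gains are adequate
```

The controller works because its model covers the one way the system can vary: the single tilt angle θ.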
Now imagine that the pendulum isn't constrained to one degree of freedom but instead has two: this is the situation when you're balancing a broom on your hand. There are now two tilt angles to worry about: the broom can fall towards or away from your body, or it can fall left or right.
You can't use the original inverted pendulum control system to solve this problem, because it only models one of the tilt angles: it's as if you could only move your hand forward and back, but never left or right. The control system has no way to correct for the other angle, so the pendulum will fall over.
The problem is that the new system can vary in ways that the control system wasn’t designed to handle: it can get into states that aren’t modeled by the original system.
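Continuing the toy simulation above (same constants, same simplified dynamics, still purely illustrative), here's what that failure looks like. The controller still only observes and corrects the forward/back angle, so the left/right angle receives no corrective input at all:

```python
def simulate_2dof(kp=40.0, kd=8.0, theta_x0=0.05, theta_y0=0.05, steps=500):
    """Same PD controller, but the broom can now also tip left/right (theta_y)."""
    theta_x, omega_x = theta_x0, 0.0          # modeled axis: forward/back
    theta_y, omega_y = theta_y0, 0.0          # unmodeled axis: left/right
    max_y = abs(theta_y)
    for _ in range(steps):
        accel = kp * theta_x + kd * omega_x                       # controller sees only theta_x
        alpha_x = (G / L) * math.sin(theta_x) - (accel / L) * math.cos(theta_x)
        alpha_y = (G / L) * math.sin(theta_y)                     # gravity alone, no correction
        omega_x += alpha_x * DT
        theta_x += omega_x * DT
        omega_y += alpha_y * DT
        theta_y += omega_y * DT
        max_y = max(max_y, abs(theta_y))
    return abs(theta_x), max_y

final_x, worst_y = simulate_2dof()
print(f"modeled axis stays near upright:  |theta_x| = {final_x:.4f} rad")
print(f"unmodeled axis falls past horizontal: max |theta_y| = {worst_y:.2f} rad")
```

The axis the controller models is held upright; the axis it doesn't model falls over, no matter how well tuned the gains are.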
This is what the Law of Requisite Variety is about: if you want to build a control system, the control system needs to be able to model every possible state that the system being controlled can get into: the state space of the control system has to be at least as large as the state space of the physical system. If it isn’t, then the physical system can get into states that the control system won’t be able to deal with.
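Ashby himself framed this in terms of disturbances, responses, and outcomes rather than state spaces. Here's a toy, discrete rendering of that idea; the function names and values are invented for illustration. Regulation is only possible if, for every disturbance that can occur, the regulator has some response that keeps the outcome acceptable; shrink the response set and some disturbances become uncorrectable:

```python
def regulatable(disturbances, responses, outcome, is_acceptable):
    """Toy, discrete reading of Ashby's law: can every possible disturbance
    be met by at least one response that keeps the outcome acceptable?"""
    return all(
        any(is_acceptable(outcome(d, r)) for r in responses)
        for d in disturbances
    )

# Toy system: the outcome is the disturbance minus the correction, and the
# only acceptable outcome is zero, so each disturbance needs its own response.
disturbances = range(4)
outcome = lambda d, r: d - r
ok = lambda o: o == 0

print(regulatable(disturbances, responses=range(4), outcome=outcome, is_acceptable=ok))  # True
print(regulatable(disturbances, responses=range(2), outcome=outcome, is_acceptable=ok))  # False
```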
Bringing this into the software world: when we build infrastructure software, we're invariably building control systems. These control systems can only handle the situations they were designed for. We run into trouble when the systems we build get into states that the designer never imagined happening. A fun example of this is a pathological traffic pattern.
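As a hypothetical illustration (a sketch, not any particular product's algorithm), consider a naive autoscaler: a control loop that watches a load metric and resizes a fleet of replicas. It copes fine with the kind of gradual growth its designer imagined, but a traffic pattern outside that model, say load that spikes and vanishes faster than the loop can react, defeats it:

```python
import math

def autoscale(traffic, capacity_per_replica=100, replicas=1, min_r=1, max_r=10, headroom=2.0):
    """Naive reactive autoscaler (hypothetical): each tick, size the fleet for
    `headroom` times the load just observed. Returns the ticks where demand
    exceeded the capacity that was actually provisioned."""
    overloaded = []
    for tick, load in enumerate(traffic):
        if load > replicas * capacity_per_replica:
            overloaded.append(tick)
        # React to what was just observed; the new size only helps next tick.
        target = math.ceil(load * headroom / capacity_per_replica)
        replicas = min(max_r, max(min_r, target))
    return overloaded

steady_ramp = [100, 150, 200, 250, 300]          # the kind of growth the designer imagined
spiky       = [100, 900, 100, 900, 100, 900]     # a pattern the loop was never designed for

print(autoscale(steady_ramp))   # []         -- the controller keeps up
print(autoscale(spiky))         # [1, 3, 5]  -- overloaded on every spike
```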
The fundamental problem with building software control systems is that we humans aren't capable of imagining all of the states that the systems being controlled can get into. In particular, we can't foresee the changes people will make in the future, changes that will create new states we could never have imagined needing to handle. And so our control systems will invariably be inadequate, because they won't be able to handle these situations. The variety of the world exceeds the variety our control systems are designed to handle.
Fortunately, we humans are capable of conceiving of a much wider variety of system states than the systems we build. That’s why, when our software-based control systems fail and the humans get paged in, the humans are eventually able to make sense of what state the system has gotten itself into and put things right.
Even we humans are not exempt from Ashby's Law. But we can revise our (mental) models of the system in ways that our software-based control systems cannot, and that's why we can deal effectively with incidents: because we can update our models, we can adapt where software cannot.