Operationalizing complexity
Often when we say a system is complex, we are not really naming a property of the system so much as the absence of its complement, simplicity. We might also, implicitly, be naming any of several properties familiar to the control theorist: robustness, instability, autocatalysis, self-replication, spatial localization, decentralization, multiple scales, multiple agents, multiple feedback loops, lossy communication, mutation, evolution, learning, adaptation, and so on. Complexity may be more than the absence of simplicity, but a universal notion of complexity would have to encompass many distinct properties and their exceptions, whereas the absence of simplicity can result from adding any single one of them.
So rather than defining complexity, we can develop a working definition of simplicity. Let’s say simplicity is a property of a system whereby, if we perturb some components or behaviors X, we know exactly, or at least directionally, what will happen to some components or behaviors Y. An interesting (and nontrivial) consequence of this definition is that a system can be simple in some descriptions and complex in others. Typing a letter onto paper using a typewriter and typing a letter into a web editor using a laptop are both simple in the map from keystroke to letter, but the laptop is complex in the map from laptop bits encoding a keystroke to server bits encoding a letter.
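The working definition above can be sketched in code. This is a minimal illustration of my own, not the author's example: the monotone map stands in for a simple X-to-Y map (perturb X, and the direction of the change in Y is always known), while the iterated logistic map in its chaotic regime stands in for a non-simple one (the same tiny perturbation pushes Y up from some starting points and down from others).

```python
def affine(x):
    # Simple map: monotone, so any upward perturbation of x moves y upward.
    return 2.0 * x + 1.0

def logistic(x, r=4.0, steps=50):
    # Iterated logistic map in its chaotic regime (r = 4): a tiny
    # perturbation of x is amplified at each step until the direction of
    # the resulting change in y is effectively unpredictable.
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

eps = 1e-9
starts = [0.051 + 0.045 * i for i in range(20)]

# Simple: the response always has the same sign as the perturbation,
# so we can answer "what happens to Y?" directionally, for every X.
assert all(affine(x + eps) > affine(x) for x in starts)

# Not simple: identical tiny perturbations move y up from some starting
# points and down from others, and the size of the response bears no
# relation to the size of the perturbation.
signs = {logistic(x + eps) > logistic(x) for x in starts}
print(signs)  # typically {True, False}: no directional prediction survives
```

The chaotic map is deterministic, so this is not noise: the X-to-Y map simply fails the "we know what will happen to Y" test, which is the sense in which it is not simple.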
Defining complexity in opposition to simplicity saves us the trouble of asking whether any property is a property of all complex systems, or even whether it is a universal property of the complex system we’re looking at right now. We just want to know whether the particular X to Y map we’re looking at is simple or not. If not, we want to know what properties are complexifying that map, and whether we have the mathematical tools to make it simple. Plainly, maps that are not simple by this definition are difficult to query and design around, so they pose a meaningful epistemological, mathematical, experimental, and engineering challenge.