The value of a good model relating the IT infrastructure to business activity shouldn't be underestimated. Business people live by their sales forecasts and the like, and they're the ones who sign the cheques. They're much happier if IT people can tell them ahead of time how much it will cost to support their business, rather than coming begging for upgrades when everything goes pear-shaped.
It's much better to have a model for this than to run on gut instinct. Probably as another instance of poor communication between geeks and business people, upgrade requests are, for various reasons¹, often ignored until it's too late (or nearly so). A model that can be presented, and that gives a (seemingly) unbiased set of predictions, is more persuasive than a suggestion that the business will need a new server sometime in the next six months.
I've seen (and built) these models based on various data sources: historic performance, correlation of statistics, public benchmark results, etc. They usually end up as a relatively complex, highly parameterised spreadsheet that offers the best possible guess for all system components based on all available data and, if you're really pushing it, error margins on the prediction.
However, by far the most effective method is to find one or two simple rules of thumb that relate business activity to resource use, round them up to nice numbers and give them to people who sign cheques. For example "One disk for each business transaction per second", "4GB RAM for each million customers", "A server for each 100 concurrent users".
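Rules like these are just single linear ratios rounded up to whole units, which makes them trivial to apply. A minimal sketch, using the hypothetical ratios quoted above (the function names and figures are illustrations, not recommendations):

```python
import math

# Each rule of thumb is one ratio of business activity to a resource,
# rounded up to a whole number of units.

def disks_needed(txns_per_second: float) -> int:
    """One disk for each business transaction per second."""
    return math.ceil(txns_per_second)

def ram_gb_needed(customers: int) -> int:
    """4GB RAM for each million customers (or part thereof)."""
    return 4 * math.ceil(customers / 1_000_000)

def servers_needed(concurrent_users: int) -> int:
    """A server for each 100 concurrent users."""
    return math.ceil(concurrent_users / 100)

print(disks_needed(12.5))        # 13 disks
print(ram_gb_needed(2_300_000))  # 12 GB
print(servers_needed(450))       # 5 servers
```

The whole point is that the arithmetic is simple enough for the people who sign the cheques to do in their heads from their own forecasts.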
These sorts of models often arise naturally, after a lot of painful hardware-upgrade negotiations. What I'm now becoming convinced of is that one should actively seek out simple rule-of-thumb models for hardware sizing from the earliest possible time.
People will internalise those rules and seem quite happy to accept them as laws of nature, and your hardware requirements will be signed off. If you offer too many (that is, nearly any) options, caveats or other complexity, the discussions will get bogged down in analysis and justification, the upgrades will be delayed, performance will suffer and the IT department will look bad again.
¹ The reasons bear some further examination. The most obvious is that the parties speak different languages: one in terms of sales per day or customers gained per month, the other in terms of CPU utilisation and IOPS. Beyond that, there's partly a suspicion, justified or not, of politics or empire-building: why would someone ask for money to be spent on their domain unless it benefited them? There's also an element of business types automatically correcting for hyperbole: a factual statement may be taken as an exaggeration (some IT managers compensate for that by exaggerating). Finally, I suspect that geeks have an understanding of queueing theory, either explicitly or, more usually, implicitly from experience, that others lack. It seems that it's not intuitive that a system that's 90% busy has twice the wait time of one that's 80% busy (though given that the same applies to traffic at road junctions, I'm not sure why more people don't realise this).
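That 90%-vs-80% claim falls straight out of the standard single-server (M/M/1) queueing result, where mean residence time scales with service time as 1/(1 − ρ) for utilisation ρ. A minimal sketch, assuming the M/M/1 model (the article itself doesn't name one):

```python
# M/M/1 queue: mean residence time R = S / (1 - rho),
# where S is the service time and rho the utilisation.
def residence_factor(rho: float) -> float:
    """Multiplier on service time at a given utilisation (M/M/1)."""
    if not 0.0 <= rho < 1.0:
        raise ValueError("utilisation must be in [0, 1)")
    return 1.0 / (1.0 - rho)

for rho in (0.5, 0.8, 0.9, 0.95):
    print(f"{rho:.0%} busy -> {residence_factor(rho):.0f}x service time")
# 50% busy -> 2x service time
# 80% busy -> 5x service time
# 90% busy -> 10x service time
# 95% busy -> 20x service time
```

At 80% busy the factor is 5, at 90% it is 10: exactly the doubling described, and the curve blows up as utilisation approaches 100%, which is why "just run it a bit hotter" hurts so much.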