Broken Symmetry


Archive for June 2008

A Neuroscientific Explanation for the Hypothesis of Periodicity and the Synchronized Flow Theory of the Firm?


Regular readers know that I’ve been exploring an extension of the traditional economic theory of the firm.  Specifically, I’ve been exploring the following hypothesis:

A firm will emerge when a single team of people can synchronize the flow of supply to demand at lower costs than two independent teams relying on market price signals alone.

"Flow" here means a unit quantity of goods per unit time.  You can read the original post, in which I place the hypothesis in the context of earlier work by Coase, Alchian & Demsetz, and Williamson, here.

To my surprise and joy, it turns out that there have been some discoveries in biology and neuroscience over the past few decades that have some very interesting affinities with the synchronized flow hypothesis.  Specifically, relatively new work on mitochondria, neuroplasticity, mirror neurons, and collaborative memory seems to fit well with the theory of synchronized flow.

The Connection between Mitochondria and the Hypothesis of Periodicity

First, the synchronized flow hypothesis was inspired by the observation that human consumption and production occur in cycles with a measurable frequency distribution.  Integrated up into cumulative distribution functions, these frequency distributions can be shown to be equivalent to aggregate supply and demand for a population within a window of time.  Over the past few days, I’ve been reading Nick Lane’s recent book about mitochondria (thanks to Tyler Cowen for the recommendation), and have discovered that, over a fairly wide range of body sizes, a power law relates the basal metabolic rate of mammals to their mass.  A leading explanation for power laws of this kind is that some network topology constrains the flow of resources, and hence sets the rhythms of metabolism.  Since mitochondria are responsible for quite a bit of our metabolism, the hypothesis among biologists is that it is the network topology of mitochondria that determines the rhythms of metabolism, at least for mammals.  In other words, our network of mitochondria provides a biological mechanism for explaining the hypothesis of periodicity.
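
To see what it means for a power law to apply, here is a minimal sketch of how such a scaling law is typically checked: fit the logarithm of basal metabolic rate against the logarithm of body mass and read off the exponent.  The data points below are made-up placeholders rather than figures from Lane’s book; Kleiber’s classic result puts the exponent near 3/4.

```python
import numpy as np

# (body mass in kg, basal metabolic rate in watts) -- hypothetical values
mass = np.array([0.02, 0.25, 3.0, 30.0, 70.0, 400.0, 4000.0])
bmr = np.array([0.2, 1.2, 6.5, 35.0, 80.0, 300.0, 1800.0])

# A power law BMR = a * mass**b is a straight line in log-log space:
#   log(BMR) = log(a) + b * log(mass)
slope, intercept = np.polyfit(np.log(mass), np.log(bmr), 1)

print(f"fitted exponent b ~ {slope:.2f}")        # Kleiber's result is roughly 0.75
print(f"fitted prefactor a ~ {np.exp(intercept):.2f}")
```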

Mitochondria and Neuroplasticity

As if that weren’t enough, it appears that mitochondria are also responsible, at least to some extent, for neuroplasticity.  See here for a description of the function of mitochondria within the brain.  Neuroplasticity is the phenomenon whereby certain neural pathways are reinforced by experience.  I started reading about neuroplasticity in this book by Norman Doidge, in which he describes (among other things) how blind people have been taught to see through the use of cameras hooked up to tactile feedback transducers.

Are you with me?  So far we’ve got a biological mechanism for explaining why we observe particular rhythms of consumption and production, which happens to be related to the mechanism that permits us to learn by repetition.

Neuroplasticity and Mirror Neurons

Another recent result from neuroscience is that there are certain neurons in the brain that will fire both when we do things and when we see another person do the same thing.  If you pair that up with neuroplasticity and the mitochondria, what you’ve got is a mechanism for us to teach and learn from one another, including a mechanism for indirectly influencing one another’s rhythms of consumption or production at a biological level.

Consider this: if I perform Zen meditation in front of you, then, because of mirror neurons, your brain is going to start firing along pathways similar to the ones my brain fires along while I’m meditating.  The result is that you are going to be more likely to want to do Zen meditation yourself; in fact, you may get some of the benefits just from watching me do it.  Zen meditation, of course, is a pretty benign example of the power here.  Investors should see immediately how much this matters.

Collaborative Memory

Has anybody looked empirically to see whether any of this neurophysiology is borne out at the level of psychology — much less at the level of economic hypotheses (such as rationality)?  In fact, yes.  Here is a summary of work done on "collaborative memory."  The surprising result of this work is that certain cognitive tasks, including group brainstorming for word-list retrieval, are negatively affected by collaboration.

In an earlier post, I explored a theory of why that might be, based on the results of other neuroscience research reported in Daniel Goleman’s new book, Social Intelligence.  My thought is that the synchronized flow theory can be explained on a neuroscientific basis as the result of comparative institutional advantages in how and what information is communicated in order to synchronize supply and demand.  In particular, the thought is that integration of firms will be favored when mirror neurons and plasticity lead to better flow synchronization.

Obviously, this is all speculation at this point.  But it’s really fascinating stuff, and I couldn’t help throwing it out there to see whether there was anybody else who would be similarly interested in it.  I realize that the connection between these various steps is fairly tenuous.  Yet lots of research has been done at each step.  If you simply connect the dots, you’ve got a fairly solid biological explanation for why groups of people choose to work together on one set of tasks, but not another.

Written by Michael F. Martin

June 30, 2008 at 5:00 pm

Accounting Information as Political Currency


The Harvard Law School Corporate Governance blog reports on research from Ramanna and Roychowdhury:

We test whether outsourcing firms understated profits in the period leading up to the 2004 election, in circumstances where the firms’ affiliated candidates were in competitive races. Understating profits can help deflect attention away from the firms’ outsourcing activities, and thus spare the candidates considerable embarrassment. We find that outsourcing firms donating to congressional candidates in closely watched races managed their earnings downwards in the two quarters immediately preceding the 2004 election. We find no evidence of downward earnings management among outsourcing corporations donating to congressional candidates not in closely watched races.

I haven’t read the paper carefully enough to have an opinion on the merits.  I would like to point out, however, that requiring firms to state average daily debit and credit flows from balance sheet accounts might deter this kind of behavior.

Written by Michael F. Martin

June 30, 2008 at 3:12 pm

What the Design of SUVs and Cost Accounting Have in Common


The design of both was predicated on an unlimited supply of cheap fuel.

When SUVs were designed, the cost of fuel was low relative to the joy that people got from driving them compared with other kinds of cars.  Nobody cared about miles per gallon.  What they wanted to know was how big it was and how fast it could go.  Don’t get me wrong.  I love cars that growl and shake when you start them.  But that growl and shake come literally at the expense of fuel efficiency.  The design loses its appeal when the price of fuel starts to cut into my enjoyment of other things.

Similarly, cost accounting was designed to measure and report activities within a firm at a time when "fuel" (i.e., capital, labor, and raw materials) was very cheap relative to the demand for the firm’s products.  Globalization supercharged that trend, and kept the flawed model of cost accounting on life support.  When labor got expensive, we simply moved to another part of the world where it was still cheap.  Capital got cheaper!  Only raw materials were getting more expensive.

And here we are in 2008, with the number of places in the world where labor is significantly cheaper than it is in the United States shrinking by the day, with capital getting more expensive thanks to inflationary monetary policy, and with raw materials more expensive than ever thanks to an oil shortage.

Time to rethink our theory of accounting, folks.  It’s not how big or how fast you are anymore.  It’s your miles per gallon that matter.

Written by Michael F. Martin

June 30, 2008 at 1:25 pm

Milton Friedman the Liberal


There has been some academic controversy over the legacy of Milton Friedman of late.  The University of Chicago has announced the opening of a new institute, and some left-leaning faculty are concerned that this will upset the ideological equipoise that the University of Chicago has sustained for so long.  It’s an important thing to be worried about.  But it’s too bad that these folks think of Friedman as "right-leaning" or "conservative."

Here’s what Gary Becker said of his former teacher’s public policy agenda shortly after his death in 2006:

I will discuss instead several ideas in his remarkable book, Capitalism and Freedom, published in 1962, that contains almost all his well-known proposals on how to improve public policy in different fields. These proposals are based on just two fundamental principles. The first is that in the vast majority of situations, individuals know their own interests and what is good for them much better than government officials and intellectuals do. The second is that competition among providers of goods and services, including among producers of ideas and seekers of political office, is the most effective way to serve the interests of individuals and families, especially of the poorer members of society.

Are these two fundamental principles conservative?  I don’t think so.  The reason that Milton Friedman is perceived to be a conservative is that he lived and worked through the post-World War II era in which everybody in power believed that central planning, especially through government, was the answer to social problems.  Anybody who has actually read Milton Friedman knows that he understood that there was a legitimate role for government.  Chapter 2 of Capitalism and Freedom is entitled "The Role of Government in a Free Society."  For Milton Friedman, freedom and government were not mutually exclusive!

We live in a much different era.  Thanks to the Internet and globalization, there is no longer a debate about whether centrally planned or decentralized institutions are the better way to organize people.  Decentralized institutions are, in fact, the only way to organize people on a global scale over a long period of time.  I believe that if Milton Friedman were alive today, he would be excited about Creative Capitalism, and interested in the idea of rethinking the way we allocate and reallocate scarce resources through property rights (like patents) and contracts (like corporate charters).  In my view, he would have been against any reductionist view of the purpose of private institutions.

In this regard then, Milton Friedman was not a conservative, but a liberal.  Here’s what he himself had to say about his liberalism:

Beginning in the late nineteenth century, and especially after 1930 in the United States, the term liberalism came to be associated with a very different emphasis, particularly in economic policy. It came to be associated with a readiness to rely primarily on the state rather than on private voluntary arrangements to achieve objectives regarded as desirable. The catchwords became welfare and equality rather than freedom. The nineteenth century liberal regarded an extension of freedom as the most effective way to promote welfare and equality; the twentieth century liberal regards welfare and equality as either prerequisites of or alternatives to freedom. In the name of welfare and equality, the twentieth-century liberal has come to favor a revival of the very policies of state intervention and paternalism against which classical liberalism fought. In the very act of turning the clock back to seventeenth-century mercantilism, he is fond of castigating true liberals as reactionary!

If Creative Capitalism meant more freedom, Milton Friedman would have been for it.

Written by Michael F. Martin

June 29, 2008 at 2:52 pm

Another Waddell & Bodek Quote


See also the first quote describing how Ford implemented a prediction market at its Highland Park plant.

The philosophical underpinnings of much of this can be found in Sloan’s "M Form" concept.  One of the key ideas was to break General Motors into divisions that would operate "independently" — decentralized, according to Sloan — but still reporting to and controlled by the corporate headquarters.  Because it was really one company, this decentralization often required the divisions to "buy" and "sell" from each other.  Fisher Body, for example, sold car bodies to the Chevrolet Division and so forth.  This notion of a company buying and selling from itself flowed down to the very details of the company’s cost accounting scheme… The idea is that somehow, even though nobody sold anything to anybody, machining made a profit on the deal, and that profit can be used to measure the return on the company’s investment in the machining department.

From Rebirth of American Industry, page 67.

Here’s an interesting SSRN paper discussing a government-sponsored review of M Form theory in the Netherlands.

Reporting credit and debit flows for each balance sheet account would make it easier to keep track of how capital expenditures were either helping or hurting the ultimate goal of selling higher quality products to customers.

Written by Michael F. Martin

June 29, 2008 at 12:41 pm

Creative Capitalism = Sustainable Capitalism


I’ve been enjoying the commentary posted at the new Creative Capitalism blog.  I love it when people at the very top of their game get together and talk in open-ended terms about how things could be improved.  Being able to listen in — and even participate through comments — is a totally earthshaking technological change.

I’m on Peter Drucker’s side of the discussion about creative capitalism, and whether it has any instrumental value to investors and managers of large institutions.  Here’s what Mr. Drucker wrote in Management: Tasks, Responsibilities, Practices:

In modern society there is no other leadership group but managers. If the managers of our major institutions, and especially of business, do not take responsibility for the common good, no one else can or will.

Drucker knew that the managers of private institutions would have to take more responsibility for the common good because he understood the limits of centrally planned institutional design.  Decentralized control through spontaneous ordering is the only sustainable form of institution over long time periods and large geographical areas.  The administrative agencies of the executive branch of the federal government in the United States are an example of how central planning cannot keep up with markets (just look at how well the Federal Reserve and the SEC have kept up with the markets).

But how can we improve on the model of management and investing followed in the United States?  We have to go back to the fundamentals, and rearticulate why it is that we’re measuring and reporting the things we do.  In particular, I believe we need to make some important adjustments to how we report things on financial statements.  If we were all measuring growth in the same way, then cooperation among the various stakeholders in corporations would be much easier.  And competition among various corporations would be more efficient.

So here’s my suggestion:

For every balance sheet account, let’s include a measure of the average daily debit flow and the average daily credit flow into and out of that account over the time period of the financial statement.

Thoughtful managers and investors may recognize how this additional measure of liquidity would help us to distinguish between a healthy corporation that is on track to grow sustainably during the next period, and a sick corporation that is being unwound slowly to generate as much cash as possible before dissolving.
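
For concreteness, here is a minimal sketch in Python of the calculation being proposed.  The ledger format, account names, and amounts are hypothetical; the point is only that average daily debit and credit flows per balance sheet account are cheap to compute from records firms already keep.

```python
from collections import defaultdict
from datetime import date

# (posting date, account, debit amount, credit amount) -- hypothetical entries
ledger = [
    (date(2008, 4, 1),  "Inventory",           12000.0,     0.0),
    (date(2008, 4, 1),  "Cash",                    0.0, 12000.0),
    (date(2008, 5, 15), "Accounts Receivable", 20000.0,     0.0),
    (date(2008, 6, 10), "Cash",                20000.0,     0.0),
    (date(2008, 6, 10), "Accounts Receivable",     0.0, 20000.0),
]

# Length of the reporting period in days (a hypothetical quarter).
period_days = (date(2008, 6, 30) - date(2008, 4, 1)).days + 1

totals = defaultdict(lambda: [0.0, 0.0])  # account -> [total debits, total credits]
for _, account, debit, credit in ledger:
    totals[account][0] += debit
    totals[account][1] += credit

for account, (debits, credits) in sorted(totals.items()):
    print(f"{account:20s} avg daily debit {debits / period_days:9.2f}"
          f"  avg daily credit {credits / period_days:9.2f}")
```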

Consider who within our economy is now best trained at measuring and reporting the average frequency of different economic events.  With that answer in mind, is it any wonder that some of the best investors in the world generate new liquidity through insurance float?

Here are a few links to other posts I’ve made about how we could do accounting and management differently:

Written by Michael F. Martin

June 27, 2008 at 10:46 am

The Ensemble of Parametric Oscillators Model of the Economy


Markets can be modeled as ensembles of parametric oscillators.  The parametric oscillator is the simplest model that is useful in understanding dynamic market prices.  For non-physicist readers: you’ve made a parametric oscillator whenever you’ve pumped your legs on a swing to build up your oscillation.  If you’ve ever had somebody push you as well, then you’ve made an amplified parametric oscillator, which is analogous to a market hooked up to a time-varying external money supply.
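
For readers who want the swing analogy in equations, here is a minimal sketch of the textbook parametric oscillator, x'' + ω₀²(1 + ε·cos(2ω₀t))·x = 0, integrated numerically.  The parameter values are illustrative assumptions; the point is that modulating a parameter at twice the natural frequency pumps up the amplitude, which is the effect the swing example describes.

```python
import numpy as np

# Parametric oscillator: x'' + w0**2 * (1 + eps * cos(2 * w0 * t)) * x = 0.
# Modulating the stiffness at twice the natural frequency pumps energy into
# the oscillation -- the "pumping your legs on a swing" effect.
w0, eps = 1.0, 0.2          # natural frequency and modulation depth (illustrative)
dt, steps = 0.001, 60000    # time step and number of integration steps

x, v = 0.01, 0.0            # small initial displacement, zero velocity
envelope = []
for i in range(steps):
    t = i * dt
    a = -w0**2 * (1.0 + eps * np.cos(2.0 * w0 * t)) * x   # acceleration
    # semi-implicit (symplectic) Euler step
    v += a * dt
    x += v * dt
    envelope.append(abs(x))

# The envelope grows roughly exponentially: parametric resonance.
print(f"max |x| in the first tenth of the run: {max(envelope[:steps // 10]):.4f}")
print(f"max |x| in the last tenth of the run:  {max(envelope[-steps // 10:]):.4f}")
```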

Supply can be modeled as an ensemble of oscillators, one for each person.  The cumulative frequency distribution of the supply ensemble is equivalent to the aggregate supply available to a market within a window of time.  Demand can be modeled as an ensemble of oscillators, one for each person.  The cumulative frequency distribution of the demand ensemble is equivalent to the aggregate demand available to a market within a window of time.  See here.  Elasticity is a function of the width of the frequency distributions at half maximum relative to their peaks.  The distributions will be Poissonian in shape.
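
Here is a minimal sketch of that construction: draw each person’s rate of demand within a time window from a Poisson distribution, histogram the draws into a frequency distribution, and integrate it up into a cumulative curve.  Every parameter below is an illustrative assumption rather than an empirical claim.

```python
import numpy as np

rng = np.random.default_rng(0)

people = 100000
mean_rate = 4.0                        # average purchases per person per window
rates = rng.poisson(mean_rate, people)

# Frequency distribution: how many people demand k units within the window.
counts = np.bincount(rates)

# Integrating the frequency distribution up gives a cumulative curve: the
# number of people demanding at most k units within the window.
cumulative = np.cumsum(counts)

# Width of the distribution at half its peak -- a crude stand-in for the
# "fatness" associated with elasticity above.
half_max = counts.max() / 2.0
above = np.nonzero(counts >= half_max)[0]

print(f"peak at k = {counts.argmax()}, width at half maximum ~ {above[-1] - above[0]} units")
print(f"people demanding at most {counts.argmax()} units: {cumulative[counts.argmax()]} of {people}")
```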

Both cumulative distribution functions can be parametrized in terms of the opportunity cost of any scarce resource within an economy, or in terms of a currency whose value does not vary rapidly with respect to other currencies over the time window.  (Doesn’t that explain why we use currency rather than bartering?)

The "temperature" of these ensembles (i.e., the shape of the distribution for a given amount of capital when scarcity and size of the ensemble are fixed) will be a function of the capital available.  Similarly, other changes in the cumulative frequency distributions of supply and demand will be a function of capital (energy), scarcity (volume), and the size of the ensemble (pressure).  If the changes are made slowly with respect to the time windows within which the distributions are measured, then convexities in the function of frequency with respect to increasing capital, decreasing scarcity, and increasing ensemble size may be observed.  Certain ranges of capital, scarcity, and size of the population will be characterized by certain types of structures.  In other words, as capital, scarcity, and size of population are tuned through different ranges, spontaneously ordered structures for the allocation of capital and resources throughout the ensembles will emerge.  Thus, the parametric oscillator model is consistent with a thermodynamics of institutional design.

Thermodynamics gives us no insight into how and when change will occur.  But the parametric amplifier model also offers insight into market dynamics.  According to this model, the ensemble of supply oscillators couples nonlinearly to the ensemble of demand oscillators.  Mathematically, the mechanism for coupling is analogous to a damping force on each ensemble that is, in part, a function of the frequency distribution for the other ensemble.  In other words, the oscillations of the two ensembles don’t simply add to or subtract from one another.  They can multiply or divide one another.

In practice, the coupling mechanism might be provided by anything that causes the frequencies of the ensembles to multiply rather than add, such as transactions costs or liquidity constraints that do not vary linearly with the quantity of goods exchanged.  Study of models of the coupling mechanism will be one of the most fruitful areas of research for econometricians.  For the coupling mechanism is not simply a function of the frequency of the supply and demand ensembles of the market in question.  Rather, it is a function of the frequency distribution for any supply or demand ensemble with non-trivial cross-elasticity with the supply and demand ensembles for the market in question.  The coupling mechanism, including the phenomenon of cross-elasticity, is the dynamic mechanism that describes how and when phase transitions will occur.
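
As a toy illustration of multiplicative (rather than additive) coupling, here is a sketch of two oscillators, standing in for the supply and demand ensembles, whose damping terms each depend on the product of the two states.  The coupling form and constants are my own assumptions, chosen only to make the multiplication explicit.

```python
# Two oscillators standing in for the supply and demand ensembles. Each feels
# a damping term proportional to the product of the two displacements, so the
# interaction multiplies rather than adds. All constants are illustrative.
ws, wd = 1.0, 1.3        # natural frequencies of "supply" and "demand"
k = 0.05                 # coupling strength (e.g., nonlinear transaction costs)
dt, steps = 0.001, 50000

xs, vs = 1.0, 0.0        # supply oscillator: displacement and velocity
xd, vd = 0.5, 0.0        # demand oscillator: displacement and velocity

for i in range(steps):
    a_s = -ws**2 * xs - k * xs * xd * vs   # multiplicative coupling term
    a_d = -wd**2 * xd - k * xs * xd * vd
    # semi-implicit (symplectic) Euler step
    vs += a_s * dt
    vd += a_d * dt
    xs += vs * dt
    xd += vd * dt

print(f"final supply state  x_s = {xs:+.3f}, v_s = {vs:+.3f}")
print(f"final demand state  x_d = {xd:+.3f}, v_d = {vd:+.3f}")
```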

Note that variations in external money supply would be a source of capital to the supply or demand ensembles that should be considered separate from the coupling mechanism.  Thus, an increase in external money supply might give rise to parametric amplification.  Variations in external money supply add many complications to understanding the dynamics of parametric oscillators.  Having a Taylor rule that describes how the external money supply varies in time makes the model easier to solve.
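
For reference, the original Taylor (1993) rule is a formula of exactly this kind.  Note that it pins down the policy interest rate rather than the money supply directly, so treating it as a description of the external money supply is a modeling shortcut.

```latex
% Taylor's (1993) rule for the nominal policy rate i_t, with inflation \pi_t,
% inflation target \pi^*, equilibrium real rate r^*, and output gap y_t:
\begin{equation*}
  i_t \;=\; r^* \;+\; \pi_t \;+\; 0.5\,(\pi_t - \pi^*) \;+\; 0.5\,y_t .
\end{equation*}
```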

Parametric oscillators exhibit many interesting dynamics.  One is the phenomenon of parametric resonance, whereby the ensembles may become synchronized in phase.  Phase synchronization is an implicit or explicit characteristic observable in all markets.  Another is the phenomenon of parametric instability.  Price bubbles can form when the resonance peak (or peaks) sits at too high a frequency to be sustainable.

For the Hayekians out there: given constant resources and population, as capital is removed from the system, spontaneous symmetry breaking will result in a new spontaneous ordering of capital, resources, and population within the market.  In other words, holding two out of three of capital, resources, or population fixed, and minimizing the third, will lead to more spontaneous order within society.

As an end note, the wave equation necessary to the parametric oscillator model will not apply over longer time scales.  Wave equations are second-order in time.  For very large time windows, dissipative forces will have more noticeable effects, and a heat equation (like the Schrödinger equation) will provide a better approximation of dynamics.  The difference in observable dynamics at different time scales is part of why microeconomics and macroeconomics are not readily joined in econometric theory.

Written by Michael F. Martin

June 26, 2008 at 12:34 pm

Why don’t VC funds employ in-house lawyers to do work for portfolio companies?


A while back, Jason Mendelson wrote a great post about his frustration with "startup lawyers," by which he means outside counsel who work on transactions relating to portfolio companies.  Like many consumers of legal services, Jason is frustrated with a trend toward lower quality and higher cost.

I don’t want to get into the discussion of what trends have emerged and why here.  Rather, I want to ask the question of why more VC funds haven’t hired outside counsel to come work in-house doing work for portfolio companies.

Benefits:

  • Lower fees from outside counsel, probably more than offsetting the salary cost
  • Better quality services from outside counsel (higher bandwidth communication, especially when it’s with a former colleague)
  • Less worry about the billable hour clock by the in-house lawyer, which means more efficient allocation of time to various legal problems
  • Lower cost for the lawyer to spend time on-site with portfolio companies, proactively avoiding legal problems
  • Lawyer also available for consulting on fund-related issues when not needed at portfolio companies

Disadvantages:

  • Requires outlay of cash-flow from management fees to cover salary
  • Agency costs that could result from soft kickbacks between in-house lawyer and outside counsel (relatively easy to reduce with monitoring)
  • Misalignment of incentives between lawyer and portfolio companies because salary is paid by VC fund (but this one is shared with the VCs!)

Note that nobody could eliminate outside counsel entirely, because of their superior access to information about new case law and unusual market scenarios, the database of internal legal documents built up over years for use in a variety of client matters, their lower-cost ability to generate stock ledgers, etc.

Often, VCs invest millions of dollars into a new round of financing, only to demand that the startup company turn around and pay outside counsel tens of thousands of dollars for the costs of advising on the transaction.  I understand why this is done from an accounting perspective.  But it is not so cost efficient.

Written by Michael F. Martin

June 26, 2008 at 8:35 am

Bounded Rationality or Broken Symmetry? Revisiting Schelling.


Symmetries as Schelling Points

Game theorists know that games with symmetry in payoffs are easier to solve.  Appendix B of The Strategy of Conflict by Thomas Schelling is titled "For the Abandonment of Symmetry in Game Theory."  In this appendix, Schelling argues that

though symmetry is consistent with the rationality of the players, it cannot be demonstrated that asymmetry is inconsistent with their rationality (page 278)

In other words, symmetry is not a necessary condition to rationality.  But as Schelling documents in Appendix B, this truth was not clear to all game theorists.  In particular, Harsanyi referred to the symmetry axiom as the "fundamental postulate," and said that "the assumption underlying the axiom is that a rational bargainer will not expect a rational opponent to grant him larger concessions than he would make himself under similar conditions."

Schelling goes on to ask whether the symmetry postulate can be derived from rationality.  He concludes that it cannot.  Thus, he observes that

we must be careful not to make symmetry part of the definition of rationality; to do so would destroy the empirical relevance of the theory and simply make symmetry an independent axiom.

In other words, we need more than the assumption of rationality to predict the outcome of any game.  But what, aside from the assumption of rationality, could be as fundamental?  Schelling does not give a direct answer.  Rather, he observes that

a theory of strategy … is inherently empirical; it depends on how people coordinate their expectations.

From here he goes on to make his famous hypothesis that rational people will coordinate their behavior through focal points — i.e., the most obvious answer known by prior experience to both people.  The theory of focal points resolves the paradox of symmetry and rationality by assuming that the Nash equilibrium will be the focal point for rational actors.  In other words, where obvious symmetries exist in payoffs, rational actors will tend to focus on those symmetries.
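
A minimal sketch makes the point concrete: in a pure coordination game, rationality alone does not select an outcome, because more than one profile survives the best-response test.  The payoffs below are illustrative.

```python
# A pure coordination game: both (A, A) and (B, B) are Nash equilibria, so
# which one is played depends on a focal point outside the payoff matrix.
payoffs = {            # (row action, column action) -> (row payoff, column payoff)
    ("A", "A"): (1, 1),
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
    ("B", "B"): (1, 1),
}
actions = ["A", "B"]

def is_nash(row, col):
    """A profile is a pure Nash equilibrium if neither player gains by deviating."""
    row_pay, col_pay = payoffs[(row, col)]
    best_row = all(payoffs[(r, col)][0] <= row_pay for r in actions)
    best_col = all(payoffs[(row, c)][1] <= col_pay for c in actions)
    return best_row and best_col

equilibria = [(r, c) for r in actions for c in actions if is_nash(r, c)]
print("pure-strategy Nash equilibria:", equilibria)   # both (A, A) and (B, B)
```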

How Schelling Points Led to an Attack on the Rational Hypothesis

The explanatory value of focal points in understanding conflict and cooperation has led to advances in many fields of economics and politics.  Schelling later won a Nobel Prize.

Inevitably, the theory of focal points has pushed many economists and political scientists to undertake a more serious study of psychology.  If theories of strategy are inherently empirical after all, then cognitive and behavioral limits must be important in determining focal points.

But somewhere along the way, some of the academics studying behavior forgot that the end goal was to observe and hypothesize focal points.  Their focus instead fell upon the hypothesis of rationality.  This isn’t surprising, given that the observed behavior of individual humans often seems to contradict the hypothesis of rationality.  But the hypothesis of rationality is not a psychology; rather, it is a prediction about how, on average, large groups of people will behave over long periods of time.  It is thus an empirical hypothesis, not about individuals, but about groups — very large groups, in fact.

Yet the attack on the hypothesis of rationality has been distracting for two reasons.  First, and less importantly, in many cases the attack on the rational hypothesis has led to a misunderstanding about what, exactly, the hypothesis of rationality hypothesizes.  I do not mean to impugn all research on the subject.  Much of it is quite interesting and useful.  But it has been misleading to non-specialists.

Second, and more importantly, the attack has distracted theorists and experimentalists from the task that Schelling gave them of identifying empirically useful focal points.  In fact, theorists of econometrics have gotten so distracted that some have given up entirely on identifying focal points that could be used to model collective behavior as complicated as market prices.  Their approach instead has been to substitute ad hoc models of psychology to limit the hypothesis of rationality.  This, in turn, has led to intractable mathematics and insoluble debates over what approximations are reasonable as limits to rationality.

Broken Symmetries as Schelling Points

Theories of economics and political science are starved for a new focal point theory that is useful and general as an empirical observation and consistent with the hypothesis of rationality.  There is such a focal point.  And what should not be surprising is that it was overlooked for the same reason that Schelling points were overlooked by other academics before The Strategy of Conflict.  Specifically, academics had reached a tacit agreement that symmetry would be the basis for theories of rationality.  Schelling’s ability to see that symmetry was only one focal point among many that could have been chosen is the reason why he is justly lauded for his deep insights into human social behavior.

And since Schelling understood how Schelling points could lead to tacit agreement, naturally he was able to identify symmetry as a Schelling point for academic economists, and ask, in effect, "Why symmetry?"  The answer he suggests is that symmetric problems are easier to solve mathematically.

This is certainly true.  To wit, to be soluble, the mathematical models for market equilibrium (and hence price) provided by economists must assume that wealth is conserved (i.e., is not created or destroyed).  Equilibrium cannot be defined when total wealth is unstable.  What only a few economists may realize, however, is that (under Emmy Noether’s Theorem) conservation laws are mathematically equivalent to continuous symmetries.  In this case, the conservation of wealth corresponds to invariance under translation in time.
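
For readers who want the correspondence spelled out, here is a compressed sketch of the Noether argument in standard Lagrangian notation.  Reading "wealth" as the conserved quantity is the analogy being drawn here, not part of the theorem itself.

```latex
% If the Lagrangian L(q, \dot q, t) has no explicit time dependence
% (time-translation symmetry), the associated conserved quantity is the energy.
\begin{equation*}
  E \;=\; \sum_i \dot q_i \,\frac{\partial L}{\partial \dot q_i} \;-\; L,
  \qquad
  \frac{dE}{dt} \;=\; -\,\frac{\partial L}{\partial t} \;=\; 0
  \quad\text{along solutions of the Euler--Lagrange equations.}
\end{equation*}
```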

Given the empirical fact that symmetries are often broken, it is natural to ask, then, what hypothesis about the social behavior of large groups over long periods could be nearly as fundamental as the hypothesis of rationality — remembering, of course, that the rules of the game are to find a hypothesis that is also consistent with the hypothesis of rationality.

My humble suggestion is the following:

The activities of people change in time according to rhythms that, averaged over a population within a period of time, have a characteristic distribution in frequency.

I call this the fundamental hypothesis of periodicity.  The hypothesis can be demonstrated to be consistent with the fundamental hypothesis of rationality by observing how supply and demand curves can be integrated up from underlying frequency distributions.  Nonetheless it provides new insights into how elasticity and price dynamics arise from the aggregation of rhythms in the behavior of large numbers of people over long periods of time.

Periodicity is a focal point that is unexplored by economists, political scientists, and others.  It could serve as the foundation for a more complete empirical theory of the firm.

Written by Michael F. Martin

June 25, 2008 at 6:11 pm

Posted in Periodicity

Inventors are Customers


Have you ever wondered why startups are better at innovating than larger corporations?

Some people say that it’s because of the culture, which promotes teamwork.  Others say it’s the focus on a constrained set of resources.  Both are true to some extent.  But I don’t see either as the root cause.

Startups are often founded by people who are frustrated because their own needs have not been met by existing products or services.  In other words, unmet needs are the source of inspiration to inventors.

Inventors are simply frustrated customers.

And if you think about it, this might explain why so many venture capital funds, investment banks, and large corporations have failed at promoting innovation.  Innovation needs to be driven by customer needs.

Do venture capitalists know customer needs?  Some do.  But their incentives are driven by large corporations (on the M&A side) and investment banks (on the IPO side).

Do investment banks know customer needs?  Some do.  But their incentives are driven by whatever will sell at a given moment.

Do large corporations know customer needs? Some do.  But their incentives to do M&A may also be driven by (misguided) attempts to increase ROI regardless of whether that will help them sell more at lower cost.

Where in the cycle of innovation does the knowledge of customer needs arise?  It’s at the point in the cycle where the beginning and the end meet.  Customers are the end, inventors the beginning.

Inventors are customers.

Written by Michael F. Martin

June 24, 2008 at 1:09 pm