“You keep using that word.  I don’t think it means what you think it means.” — Project meeting blues (technical)

I could have led, instead, with a quote from Upton Sinclair.  I don’t usually take colleagues to task in my posts, but this week is an exception, only because of the impact of bad conclusions stemming from invalid assumptions.

I attended another project management meeting this week and heard some troubling verbal arguments on the use of data that provides a great deal of fidelity in documenting and projecting project performance.

First some background.

I’ve written in previous posts about the use of schemas for well-defined datasets (as opposed to unstructured data that defy, for the moment, the relational DBMS).  I’ve also written about the economies in data streams and how, in today’s world, the cost of more/better (mo’better) data is marginal, defined by the cost of the electricity consumed in producing it.

Also note that I am involved in actually leveraging these concepts to the advantage of both government and A&D customers.  My policy is to separate that work from the educational purpose of this blog.  I also observe confidentiality regarding customer operations, even though there is a marketing disadvantage as a result, and so these specifics are always avoided.

But needless to say I am involved in actually doing what others, who are not doing these things, assert is impossible or undesirable or can only be done at great cost.  It’s inconceivable!  

I was reminded yesterday by another colleague that my strict application of confidentiality has contributed to this condition.  But my take is that when the customer is ready to reveal the results, the economics will win out.  Note that many of these things are ideas of my customers that my knowledge of technology has brought to fruition, so my blog posts document discoveries sometimes as they happen.

Here is a paraphrase of what was said (with my annotations):  “I came up using (obsolete software app).”  (Okay, you just dated yourself.)  “Providing cumulative-to-date project data is preferable in lieu of monthly period data, and I voiced that to (the governing agency).”  (So you prefer deriving period data by calculation, which provides much less accuracy and fidelity in reporting.)  “Any errors in the period will eventually work itself out.”  (It is almost impossible to take this assertion as supporting the argument, and it undermines the seriousness of the point.)
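To make the fidelity point concrete, here is a minimal Python sketch, with entirely hypothetical numbers, of what happens when period data must be derived by differencing cumulative-to-date submissions: a retroactive correction to a prior period becomes indistinguishable from current-period performance.

```python
# Hypothetical numbers only: the contractor's true monthly BCWP, with a
# correction to March booked in the April submission.
true_period = {"Jan": 100.0, "Feb": 120.0, "Mar": 90.0, "Apr": 110.0}
mar_correction = -15.0  # March was overstated by 15 and corrected a month later

# Cumulative-to-date values as actually reported each month
cum = {
    "Jan": 100.0,
    "Feb": 220.0,
    "Mar": 310.0,  # still carries March's overstatement
    "Apr": 310.0 + mar_correction + true_period["Apr"],  # 405.0 after correction
}

# Deriving "period" data by differencing successive cumulative reports
months = ["Jan", "Feb", "Mar", "Apr"]
derived = {m: cum[m] - (cum[prev] if prev else 0.0)
           for prev, m in zip([None, *months], months)}

print(derived["Apr"])      # 95.0  -- the correction is smeared into April
print(true_period["Apr"])  # 110.0 -- what April actually earned
```

Native period reporting preserves both the correction and April’s actual performance; a cumulative-only stream destroys that distinction, and no amount of “working itself out” recovers it.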

Now note that the argument rests on the speaker’s preference for an obsolete software app, and probably any clones built on that same app structure, not on the efficiencies dictated by the core data in project management systems of record.

This is why I have advocated and written about the need to develop models for project and data management.  When compared to the economic models that do exist, these verbal assertions, which have sounded reasonable in the past, fail to pass muster.

Technical Foul — It’s Time for TPI in EVM

For more than 40 years the discipline of earned value management (EVM) has gone through a number of changes in its descriptions, governance, and procedures.  During that same time its community has been resistant to improvements in its methodology, or to changes that extend its value by taking into account other methods that either augment its usefulness or potentially provide more utility in the area of performance management.  This has been especially the case where it is suggested that EVM is just one of many methodologies that contribute to this assessment under a more holistic approach.

Instead, it has been asserted that EVM is the basis for integrated project management.  (I disagree, and solely on the evidence that if it were so, then project managers would more fully participate in its organizations and conferences.  This would then pose the problem that PMs might propose changes to EVM that, well…default to the second sentence in this post.)  As evidence, one need only mention the resistance to such recent developments as earned schedule, technical performance, and risk, most especially risk based on Bayesian analysis.

Some of this resistance is understandable.  First, it took quite a long time just to get to a consensus on the application of EVM, though its principles and methods are based on simple and well-proven statistical methods.  Second, the industries in which EVM has been accepted are sensitive to risk, and so a bureaucracy of practitioners has grown to ensure both consensus and compliance with accepted methods.  Third, the community of EVM practitioners consists mostly of cost analysts, trained in simple accounting, arithmetic, and statistical methodology.  It is thus a normal human bias to assume that the path of one’s previous success is the way to future success, even though our understanding of the design space (reality) that we inhabit has been enhanced through new knowledge.  Fourth, there is a lot of data that applies to project management, and the EVM community is only now learning the ways this other data affects our understanding of measuring project performance and the probability of reaching project goals in rolling out a product.  Finally, there is the less defensible reason that a lot of people and firms have built careers that depend on maintaining the status quo.

Our ability to integrate disparate datasets is accelerating yearly thanks to digital technology, and the day when we achieve integration of all relevant factors in project and enterprise performance is inevitable.  To be frank, I am personally engaged in such projects and am assisting organizations in moving in this direction today.  Regardless, we can advance the discipline of performance management now by pulling down low-hanging fruit.  The most reachable, in my opinion, is technical performance measurement.

The literature of technical performance has come quite a long way, thanks largely to the work of the Institute for Defense Analyses (IDA) and others, particularly the National Defense Industrial Association through the publication of its predictive measures guide.  This has been a topic of interest to me since its study was part of my duties back when I was still wearing a uniform.  Those early studies resulted in a paper that proposed a method of integrating technical performance, earned value, and risk.  A pretty comprehensive overview of the literature and guidance for technical performance can be found in this presentation by Glen Alleman and Tom Coonce given at EVM World in 2015.  It must be mentioned that Rick Price of Lockheed Martin also contributed greatly to this literature.

Keep in mind what is meant when we decide to assess technical performance within the context of R&D.  It is an assessment against expected or specified:

a.  Measures of Effectiveness (MoE)

b.  Measures of Performance (MoP), and

c.  Key Performance Parameters (KPP)

The opposition from the project management community to widespread application of this methodology took three forms.  First, it was argued, the method used to adjust the value of work earned (and thus CPI) seemed always to have a negative impact.  Second, there are technical performance factors that transcend the WBS, making it hard to properly adjust the individual control accounts based on the contribution of technical performance.  Third, some performance measures defy an assessment of value in a time-phased manner.  The most common example has been tracking the weight of an aircraft, which has contributors from virtually all of the components that go into it.

Let’s take these in order.  But lest one think that this perspective is an artifact of 1997: just a short while ago, in the A&D community, the EVM policy office at DoD attempted to apply a somewhat modest proposal to ensure that technical performance was included as an element in EVM reporting.  Note that the EIA-748 standard states this requirement clearly and has done so for quite some time.  Regardless, the same three core objections were raised in comments from industry.  This caused me to ask some further in-depth questions, and my revised perspective follows below.

The first condition occurred, in many cases, due to optimism bias in registering earned value, which often arises when a single-point estimate of percent complete is supplied by a limited population of experts contributing to an assessment of the element.  Fair enough, but as you can imagine, it’s not a message that a PM wants to hear or will necessarily accept or admit, regardless of the merits.  There are more than enough pathways for second-guessing and testing selection bias at other levels of reporting.  Glen Alleman’s Herding Cats blog post of 12 August provides a very good list of the systemic reasons for program failure.

Another factor is that the initial methodology did skew toward more pessimistic results.  This was not entirely apparent at the time because the statistical methods applied did not make it clear.  But, to critique that first proposal, which was the result of contributions from IDA and other systems engineering technical experts: the 10-50-90 method of assessing probability along the bandwidth of the technical performance baseline was too inflexible.  The graphic that we proposed is as follows, and one can see that, while it was “good enough,” rolling it up could introduce some bias that required adjustment.

TPM Graphic


Note that this range around 50% can be interpreted as equivalent to the bandwidth found in the presentation given by Alleman and Coonce (as well as in the Predictive Measures Guide), though the intent here was to perform an assessment based on a simplified means of handicapping the handicappers, or more accurately, performing a probabilistic assessment of expert opinion.  The method of performing Bayesian analysis to achieve this had not yet matured for such applications, so we proposed a simple method that our practitioners could understand and that still met the criteria of a valid approach.  The reason for the difference in the graphic resides in the fact that the original assessment did not view this time-phasing as a continuous process, but rather as an assessment at critical points along the technical baseline.
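For readers unfamiliar with the mechanics, here is a minimal sketch of such a simplified probabilistic assessment at discrete points along the technical baseline.  I use Swanson’s-rule weights (0.3/0.4/0.3), one common way to collapse P10/P50/P90 expert estimates into an expected value; the weights and numbers are illustrative assumptions, not the 1997 method itself.

```python
def expected_achievement(p10: float, p50: float, p90: float) -> float:
    """Collapse three percentile estimates of achievement-to-plan
    (1.0 = on the technical baseline) into one expected value,
    using Swanson's-rule weights (an assumption for illustration)."""
    return 0.3 * p10 + 0.4 * p50 + 0.3 * p90

# Hypothetical milestone assessments for a single TPM
milestones = {
    "preliminary design review": (0.80, 0.95, 1.00),
    "critical design review":    (0.85, 0.97, 1.02),
    "first article test":        (0.90, 1.00, 1.03),
}

for event, (p10, p50, p90) in milestones.items():
    print(f"{event}: expected achievement = "
          f"{expected_achievement(p10, p50, p90):.3f}")
```

Because development-era expert triples tend to carry a longer downside tail than upside, the expected value lands below the median at each milestone, and rolling such values up compounds the effect; this is the pessimistic skew described above.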

From a practical perspective, however, the banding proposed by Alleman and Coonce takes into account the noise that will be experienced during the development life cycle, and so solves the slight skew toward pessimism.  We’ll leave aside for the moment how we determine the bands and, thus, acceptable noise as we track along our technical baseline.
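A minimal sketch of that banding idea follows; the band width here is a hypothetical symmetric tolerance, whereas Alleman and Coonce derive their bands from the noise expected over the development life cycle.

```python
def classify(observed: float, planned: float, band: float) -> str:
    """Flag a TPM reading as in-band noise or an out-of-band variance,
    given a symmetric fractional tolerance (an illustrative assumption)."""
    deviation = (observed - planned) / planned
    return "in-band noise" if abs(deviation) <= band else "out-of-band variance"

# Hypothetical aircraft empty-weight readings (lbs) against the plan
readings = [("CDR", 10_000, 10_300), ("first flight", 10_000, 10_450)]
for event, planned, observed in readings:
    print(event, "->", classify(observed, planned, band=0.04))
# CDR: +3.0% falls within the 4% band; first flight: +4.5% breaches it
```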

The second objection is valid only insofar as the alignment of work-related indicators varies from project to project.  For example, some legs of the WBS tree go down nine levels and others go down five, based on the complexity of the work and the organizational breakdown structure (OBS).  Thus where we peg the control account (CA) and work package (WP) levels within each leg of the tree becomes relative.  Do the schedule activities have a one-to-one or many-to-one relationship with the WP level in all legs?  Or is the CA level the lowest at which the alignment can be made in certain legs?

Given that planning begins with the contract spec and (ideally) proceeds from IMP –> IMS –> WBS –> PMB in continuity, we will be able to determine the contributions of TPMs to each WBS element at the appropriate level.
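As a concrete, and entirely hypothetical, illustration of pegging TPM contributions at whatever level the alignment supports, here is a sketch in which one WBS leg supports alignment at the work package level while another supports it only at the control account level; element names and shares are invented for the example.

```python
# Hypothetical WBS legs; names, peg levels, and shares are illustrative only.
wbs = {
    "1.1 Airframe": {
        "peg": "WP",  # schedule activities map cleanly to work packages here
        "tpms": {"empty_weight": 0.6},
    },
    "1.2 Propulsion": {
        "peg": "CA",  # alignment is only achievable at the control account
        "tpms": {"empty_weight": 0.4, "specific_fuel_consumption": 1.0},
    },
}

def tpm_distribution(tpm: str) -> dict:
    """Return each leg's share of a TPM, pegged at that leg's alignment level."""
    return {leg: data["tpms"][tpm]
            for leg, data in wbs.items() if tpm in data["tpms"]}

print(tpm_distribution("empty_weight"))
# {'1.1 Airframe': 0.6, '1.2 Propulsion': 0.4} -- shares sum to 1.0
```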

This leads us to another objection, which is that not all organizations bother with developing an IMP.  That is a topic for another day, but whether such an artifact is created formally or not, one must achieve in practice the purpose of the IMP in order to get from contract spec to IMS on any effort sufficiently complex to warrant CPM scheduling and EVM.

The third objection is really a child of the second objection.  There very well may be TPMs, such as weight, with so many contributors that distributing the impact would both dilute the visibility of the TPM and present a level of arbitrariness in distribution that would render its tracking useless.  (Note that I am not saying that the impact cannot be distributed because, given modern software applications, this can easily be done in an automated fashion after configuration.  My concern is in regard to visibility on a TPM that could render the program a failure).  In these cases, as with other indicators that must be tracked, there will be high level programmatic or contract level TPMs.

So where do we go from here?  Alleman and Coonce suggest adjusting the formula for BCWP, where P is informed by technical risk.  The predictive measures guide takes a similar approach and emphasizes the systems engineering (SE) domain in getting to an assessment of the impact of reported EVM element performance.  The recommendation of the 1997 project that I headed in assignments across Navy and OSD was to inform performance based on a risk assessment of probable achievement at each discrete performance milestone.  What all of these studies share, along with standard industry practice using SE principles, is an intermediate assessment, informed by risk, of a technical performance index against a technical performance baseline.
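A hedged sketch of the kind of adjustment this implies appears below: earned value (BCWP) is discounted by a technical performance index so that claimed percent complete cannot outrun demonstrated technical achievement.  The linear discount and the numbers are my illustration, not the exact formula of any of the studies cited.

```python
def adjusted_bcwp(bcwp: float, tpi: float) -> float:
    """Scale earned value by the technical performance index, capped at 1.0
    so technical over-achievement does not inflate cost performance."""
    return bcwp * min(tpi, 1.0)

# Hypothetical reporting-period values
acwp, bcwp = 1_050.0, 980.0
tpi = 0.92  # e.g., demonstrated value / planned value at this milestone

bcwp_t = adjusted_bcwp(bcwp, tpi)
print(f"CPI, unadjusted:           {bcwp / acwp:.2f}")    # 0.93
print(f"CPI, technically informed: {bcwp_t / acwp:.2f}")  # 0.86
```

Because the index is capped at 1.0, the adjustment can only hold or reduce earned value, which is the mechanical reason behind the first objection that the method “seemed always to have a negative impact.”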

So let’s explore this part of the equation more fully.

Given that MoEs, MoPs, and KPPs are identified for the project, different methods of determining progress apply.  There can be a very simple set of TPMs that, through the acquisition or fabrication of compliant materials, meet contractual requirements.  These are contract-level TPMs.  Depending on contract type, achievement of these KPPs may result in either financial penalties or financial rewards.  Then there are the R&D-dependent MoEs, MoPs, and KPPs that require more discrete time-phasing and ties to the physical completion of work documented through the WBS structure.  As with EVM’s measurement of the value of work, our index of physical technical achievement can be determined through various methods: current EVM methods, Monte Carlo simulation of technical risk, 10-50-90 risk assessment, Bayesian analysis, etc.  All of these methods are designed to militate against selection bias and the inherent limitations of small sample sizes and, hence, extreme subjectivity.  Still, expert opinion is a valid method of assessment and (in cases where it works) better than a WAG or a coin flip.

Taken together, these TPMs can be used to determine the technical achievement of the project or program over time, with a financial assessment of the future work needed to bring it into line.  These elements can be weighted, as suggested by Coonce, Alleman, and Price, through an assessment of relative risk to project success.  Some of these TPIs will apply to particular WBS elements at various levels (since their efforts are tied to specific activities and schedules via the IMS), while the most important project- and program-level TPMs are reflected at those levels.
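A minimal sketch of that weighted roll-up, with hypothetical TPMs and risk weights, might look like this:

```python
# name: (tpi, weight reflecting relative risk to project success);
# all values are invented for illustration.
tpms = {
    "empty_weight":              (0.95, 0.5),
    "specific_fuel_consumption": (0.88, 0.3),
    "radar_cross_section":       (1.02, 0.2),
}

total_weight = sum(w for _, w in tpms.values())
program_tpi = sum(tpi * w for tpi, w in tpms.values()) / total_weight
print(f"program-level TPI: {program_tpi:.3f}")  # 0.943
```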

What about double counting?  A comparison of the aggregate TPIs and the aggregate CPI and SPI will determine the fidelity of the WBS to technical achievement.  Furthermore, a proper baseline review will ensure that double counting doesn’t occur.  If the element can be accounted for within the reported EVM elements, then it need not be tracked separately by a TPI.  Only those TPMs that cannot be distributed or that represent such overarching risk to project success need be tracked separately, with an overall project assessment made against MR or any reprogramming budget available that can bring the project back into spec.

My last post on project management concerned the practices at what was called Google X.  There, incentives are given to teams that identify an unacceptably high level of technical risk that will fail to pay off within the anticipated planning horizon.  If the A&D and DoD community is to become more nimble in R&D, it needs the necessary tools to apply such long-established concepts as Cost-As-An-Independent-Variable (CAIV) and Agile methods (without falling into the bottomless pit of unsupported assertions by the cult, such as the elimination of estimating and performance tracking).

Even with EVM, the project and program management community needs a feel for where their major programmatic efforts stand in terms of delivery and deployment, looking at the entire logistics and life cycle system.  The TPI can be the logic check on whether to push ahead, to finish the low-risk items remaining in R&D and move to first-item delivery, or to take the lessons learned from the effort, terminate the project, and incorporate those elements into the next-generation project or related components or systems.  This aligns with the concept of testing a project against its framing assumptions as an early indicator of continued project investment at the corporate level.

No doubt, existing information systems, many built using 1990s technology and limited to line-and-staff functionality, do not provide the ability to do this today.  Of course, these same systems do not take into account a whole plethora of essential information regarding contract and financial management: the tracking of CLINs/SLINs, work authorization and change order processing, the flow of funding from TAB to PMB/MR and from PMB to CA/UB/PP, contract incentive threshold planning, and the list goes on.  What this argues for is innovation, and rewarding those technology solutions that take a more holistic approach to project management within its domain as a subset of program, contract, and corporate management, and that do so without some esoteric promise of results at some point in the future after millions of dollars of consulting, design, and coding.  The first company or organization that does this will reap the rewards of doing so.

Furthermore, visibility equals action.  Diluting essential TPMs within an overarching set of performance metrics may have the effect of hiding them and failing to properly identify, classify, and handle risk.  Including TPI as an element at the appropriate level will provide necessary visibility to get to the meat of those elements that directly impact programmatic framing assumptions.

Saturday Music Interlude — Margo Price: A Midwest Farmer’s Daughter

Margo Price is a country music sensation; there is just no getting around it.  But she has come to it the hard way.

She hails from Aledo, Illinois; her Allmusic bio states that she dropped out of college at the age of 20 in 2003 and moved to Nashville to pursue her musical dreams.  In 2010 she formed the band Buffalo Clover with her bassist husband Jeremy Ivey; the band released three albums before breaking up in 2013.  Personal tragedy then intervened with the death of her firstborn son from a heart ailment.  After that unfathomable heartbreak, her website bio confesses, she fell into a deep depression that involved alcohol abuse and a brush with her darker side that pitted her against the law.  Coming through that period with the help of family and friends led her to the conclusion that she was “going to write music that I want to hear.  It was a big turning point.”

Pain, heartbreak, tragedy, and hardscrabble experience all lay the foundation for great art.  It is a great artist who can channel the energy from that passion and pain into their art without spinning out of control or falling into self-pity.  Margo Price is a great artist with an amazing instrument of a voice, and it is great art that she achieves on her solo album Midwest Farmer’s Daughter.

The first song from the album is entitled “Hands of Time” and here she is performing it at SXSW thanks to NPR Music Front Row:

My first impression of the video is that she looks and sounds for all the world like the reincarnation of a young Lesley Gore.  One could make references to the obvious influence of Loretta Lynn, informed by the modernist attitude of a Kacey Musgraves.  But I say this with a great deal of self-doubt, because the music on this album is so special and so singular that it sounds both familiar and new.  Margo Price has created her own tradition, and it will be interesting to see where she goes from here.  For the fact of the matter is that her songs could be sung by either a man or a woman, and that’s what makes them special.  Rather than speaking from an overtly female perspective, as much of female country music has done in the past, Ms. Price speaks from the heart of some great consciousness, to feelings and experiences that we all understand, with which we can empathize, and which we feel in our own psyches.

For something a bit more energetic, here she is performing “Tennessee Song,” also from SXSW 2016 and NPR.


Finally, here she is on CBS This Morning from March 26, 2016 performing “Since You Put Me Down” where she channels the spirit of Hank Williams Sr. and other country music pioneers.


New York Times Says Research and Development Is Hard…but maybe not

At least that is what a reader is led to believe by this article that appeared over the weekend.  For those of you who didn’t catch it, Alphabet, which formerly had an R&D shop under the old Google moniker known as Google X, does pure R&D.  According to the reporter, one Conor Dougherty, the problem, you see, is that R&D doesn’t always translate into direct short-term profit.  He then makes this absurd statement:  “Building a research division is an old and often unsuccessful concept.”  He knows this because some professor at Arizona State University (that world-leading hotbed of innovation and high tech) told him so.  (Yes, there is sarcasm in that sentence).

Had Mr. Dougherty understood new technology, he would know that all technology companies are, at core, research organizations that sometimes make money in the form of net profits, just as someone once accurately described Tesla to me as a battery company that also makes cars (and lately it’s showing).  But let’s return to the howler of a statement about research divisions being unsuccessful, apply some, you know, facts and empirical thought, and go from there.

The most obvious example of a research division is Bell Labs.  From the article one would think that Bell Labs is a dinosaur of the past, but no, it still exists as Nokia Bell Labs.  Bell Labs was created in 1925 and has its antecedents in both Western Electric and AT&T, but its true roots go back to 1880, when Alexander Graham Bell, after being awarded the Volta Prize for the invention of the telephone, opened Volta Labs in Washington, D.C.  It was in the 1920s, though, that Bell Labs, “the Idea Factory,” really hit its stride.  Its researchers improved telephone switching and sound transmission, and invented radio astronomy, the transistor, the laser, information theory (about which I’ve written extensively, and which directly impacts computing and software), Unix, and the C and C++ programming languages.  Bell established the precedent that researchers retained, and were compensated for the use of, their inventions and IP.  This goes well beyond the assertion in the article that Bell Labs largely made “contributions to basic, university-style research.”  I guess New York Times reporters, fact checkers, and editors don’t have access to the Google search engine or Wikipedia.

Between 1937 and 2014, seventeen of its researchers were awarded the Nobel Prize or the Turing Award.  Even those who never garnered an award, like Claude Shannon of the aforementioned information theory, belong to a Who’s Who of high tech researchers.  What they didn’t invent directly they augmented and brought to practical use, with a good deal of their input going into public R&D through consulting and other contracts with the Department of Defense and the federal government.

The reason Bell Labs didn’t continue as a research division of AT&T wasn’t due to some dictate of the market or investor dissatisfaction.  On the contrary, AT&T (Ma Bell) dominated its market, and Bell Labs ensured that it stayed far ahead of any possible entry.  This is why the U.S. Justice Department’s divestiture agreement with AT&T under antitrust law, effective in 1984, split the local carriers off from the company that retained Bell Labs, in order to promote competition.  Whether the divestiture agreement was a good deal for the American people and had positive economic effects is still a cause for debate, but it is likely that the plethora of choices in cell phone and other technologies that have emerged since that time would not have gone to market without that antitrust action.

Since 1984, Bell Labs has continued its significant contributions to the high tech industry through AT&T Technologies, which was spun off in 1996 as Lucent Technologies, which is probably why Mr. Dougherty didn’t recognize it.  A merger with Alcatel and then acquisition by Nokia provided it with its current moniker.  Over that period Bell Labs continued to innovate and has contributed significantly to pushing the boundaries of broadband speed and the use of imaging technology in the medical field.

So what this shows is that, while not every bit of R&D leads directly to profit, especially in the short term, a mix of types of R&D does yield practical results.  Anyone who has worked in project management understands that R&D, by definition, represents the handling of risk.  Furthermore, the lessons learned and spin-offs are hard to estimate in advance, though they may result in practical technologies in the short and medium term.

When one reads past the lede and the “research division is an old and often unsuccessful concept” gaffe, among others, what you find is that Google specifically wants this portion of the research division to come up with a series of what it calls “moon shots.”  In techie lingo this is often called a unicorn, and from personal experience I am part of a company that was recently characterized as delivering a unicorn.  This is simply shorthand for producing a solution that is practical, groundbreaking, and shifts the dialogue of what is possible.  (Note that I’m avoiding the tech hipster term “disruption.”)

Another significant fact that we find out about Google X is the following:

X employees avoid talking about money, but it is not a subject they can ignore. They face financial barriers that can shut down a project if it does not pan out as quickly as planned. And they have to meet various milestones before they can hire more people for their teams.

This sounds a lot like project and risk management.  But Google X goes a bit further.

Failure bonuses are also an example of how X, which was set up independent of Google from the outset, is a leading indicator of sorts for how the autonomous Alphabet could work. In Alphabet, employees who do not work for Mother Google are supposed to have their financial futures tied to their own company instead of Google’s search ads. At X, that means killing things before they become too expensive.

Note that the incentive here, a real financial reward to the team members, is to manage risk.  No doubt there are no #NoEstimates cultists at Google.  Psychologically, providing an incentive to find failure no doubt defeats groupthink and optimism selection bias.  Much of this sounds, particularly in the expectation of non-existential failure, amazingly like an article recently published on AITS.org by yours truly.

The delayed profitability of software and technology companies is commonplace.  The reason, at least to my thinking, is that any technology type worth their salt will continue to push the technology once the first version has gone to market.  If you’re resting on your laurels, then you’re no longer in the software technology business; you’re in the retail business and might as well be selling candy bars or any other consumer product.  What you’re not doing is providing a solution that is essential to the target domain.  Practically, what this means is that, in garnering value, net profitability is not necessarily the measure of success, especially in the first years.

For example, market leaders such as Box, Workday, and Salesforce have gone years without a net profit, though their revenues and market share are significant.  Facebook did not turn a profit for five years; Amazon took six, and even those figures were questionable.  The competing needs for any executive running a company are value (the intrinsic value of IP, the existing customer base, and the potential customer base) and profit.  The job of the CEO is not owed just to stockholders, yet the article in its lede is clearly biased that way.  The fiduciary and legal responsibility of the CEO is to the customers, the employees, the entity, and the stockholders, and not necessarily in that order.  There is thus a natural conflict in balancing these competing interests.

Overall, if one ignores the contributions of the reporter, the case of Google X is a fascinating one for its expectations and handling of risk in R&D-focused project management.  It takes value where it can and cuts its losses through incentives to find risk that can’t be handled.  An investor who lives in the real world should find this reassuring.  Perhaps these lessons on incentives can be applied elsewhere.


Over at AITS.org — Failure is not Optional

My latest is at this link at AITS.org, with the provocative title “Failure is not Optional: Why Project Failure is OK.”  The theme and specifics of the post, however, are not that simple, and I continue with a sidebar on Grant’s conduct of the Overland Campaign entitled “How Grant Leveraged Failure in the Civil War.”  A little elaboration is in order once you read the entire post.

I think what we deal with in project management are shades of failure.  It is important to understand this because we rely too often on projections of performance that turn out to be unrealistic within the framing assumptions of project management.  In this context our definition of success turns out to be fluid.

To provide a simple example from other games of failure, let’s take American baseball.  A batter who hits safely more than 30% of the time is deemed skilled in the art of hitting a baseball.  A success.  Yet, looked at from a total perspective, this says that 70% failure is acceptable.  A pitcher who gives up between 2 and 4 earned runs a game is considered skilled in the art of pitching.  Yet this defines a range of acceptable failure against the goal of giving up zero runs.  Furthermore, if your team wins 9-4, you’re considered a winning pitcher.  If you lose 1-0, you are a losing pitcher, and there are numerous examples of talented pitchers, considered skilled in their craft, who had losing records because of a lack of run production by their teams.  Should the perception of success and failure be adjusted based on whether one pitched for the 1927 or 1936 or 1998 Yankees, or the 1963 Dodgers, or the 1969 Mets?  The latter two teams were built on just enough offense to provide the winning advantage, with the majority of the pressure placed on the pitching staff.  Would Tom Seaver be classified as less extraordinary in his skill if he had averaged giving up half a run more?  Probably.

Thus, when we look at the universe of project management and see that the overwhelming majority of IT projects fail, or that the average R&D contract realizes a 20% overrun in cost and a significant slip in schedule, what are we measuring?  We are measuring risk in the context of games of failure.  We handle risk to absorb just enough failure and noise in our systems to push the envelope on development without sacrificing the entire project effort.  To know the difference between transient and existential failure, between learning and wasted effort, and between intermediate progress and strategic position requires a skillset that is essential to the ultimate achievement of the goal, whether it be deployment of a new state-of-the-art aircraft or a game-changing software platform.  The noise must pass what I have called the “so-what?” test.

I have listed in the article a set of skills necessary to understanding these differences that you may find useful.  I have also provided some ammunition for puncturing the cult of “being green.”

Sunday Music Interlude — Adia Victoria, SHEL, and onDeadWaves

I haven’t written about music in a while, so it’s time to catch up on some of the more interesting new acts and new projects that I’ve come across.

Originally out of South Carolina, Adia Victoria now calls Nashville home.  Her interesting bio can be found at Allmusic.com here.  Her original music is a combination of country and electric blues, punk, garage rock, and a modern type of dark Americana roots music borne of the narrative tradition and neo-folk.  Her voice consists of a girlish rasp wrapped in an alto silkiness.  You can learn more about her at her website at www.adiavictoria.com.

She was named WXPN’s Artist to Watch for July 2016 and just performed on NPR’s Tiny Desk Concert.  The performance from the latter appears below.


SHEL is a group of four sisters out of Fort Collins, Colorado.  I wrote about them back in September 2014, when they were just out of the egg, featuring their neo-folk music after an EP and first album.  They have since matured and have come out with a critically hailed album entitled Just Crazy Enough.  They just played live on Echoes.org with John Diliberto.  Here they are performing a couple of selections that reveal both their developing maturity and the natural talent informing it.  The first is “Let Me Do.”  The song begins deceptively simply, then changes both tempo and melody, carried by the ethereal combined voice of their harmony vocals in the call and response from narrative to chorus.

Speaking of ethereal, here is SHEL performing “I’m Just a Shadow.”  This is first-class neo-noir folk and roots music.  The following lyric video highlights the emotional power of the lyrics.

It is probably time for a shout-out to John Diliberto at Echoes.org.  I actually came across John’s taste in music through the program Star’s End, which is still ongoing.  There I was introduced to ambient and space music in the 1970s, when I split time between my home state of New Jersey and my job in Washington, D.C.  FM radio waves being what they were, especially in the early morning over weekends, I would occasionally be able to tune into the program, which if memory serves was out of Philly, while driving down some deserted highway with the star-streaked night sky above, and wish that the feeling of my movement through time and space, the fresh air from the open windows, the firmament of the night sky, and the music, which seemed to transport me to some other dimension, would never end.  Then, after years traveling and at sea, I was reintroduced to John as a music critic through his contributions to the long-missed CD Review magazine.  His thoughtful, eloquent, and informative reviews opened my world to new music and new musical genres that I would probably not otherwise have explored.  There are a few critics who fall into this category for me, including Ralph Gleason, Leonard Feather, Ira Gitler, John McDonough, Robert Christgau, Gary Giddins, Orrin Keepnews, Greil Marcus, Dave Marsh, Michael Cuscuna, and David Dye, among a few others, all good company.

This serves as an introduction to another project to which I was introduced through Echoes.org and Mr. Diliberto: the group onDeadWaves.  The group consists of British singers Polly Scattergood and James Chapman.  Their maiden album is this month’s Echoes CD of the Month.  According to the review by John Diliberto, onDeadWaves’ sound is like “a meeting of Lanterna, driving across the desert in a ’57 Chevy, with Leonard Cohen and Lucinda Williams in the backseat.”  Their music, also called “shoegaze west,” seems more varied than that, especially when confronted by the ’60s Byrds-like guitar and unrestrained punk of the song “California.”  Overall, though, I can hear the influence of the moodier neo-noir song-styling of Lana Del Rey through most of the songs.  Perhaps Ms. Del Rey was onto something after all.

Here they are performing the song “Blue Inside.”  Other videos are also available at the Echoes site linked above.


My Little Town — Orlando, Florida

Pulse Orlando memorial

Photo courtesy of Donna Pisano

Conference, workshop, and vacation season had slowed blogging of late.  When it was over I had a number of things to post regarding interesting discussions and trends in the field of project management.  Then on my return to my adopted home town of Orlando, the singer Christina Grimmie was shot and killed after performing a concert at the local music venue The Plaza Live–a beautiful young woman senselessly struck down by a cypher of a man.  Then, early on Sunday my wife and I woke to the news of the mass shooting at Pulse nightclub.  Both venues are less than two miles from our home.

Writing about the more mundane issues of contract, technology, and earned value management just does not seem appropriate when the families of the 49 dead and 42 wounded are either mourning or anxiously hopeful so close by.

The impact on this community has been significant.  Orlando, like most cities, is a place of contradictions.  In the minds of tourists and those who come here for the amusement parks, it is all about Disney, Sea World, Universal, and all the others.  This, however, is not Orlando.  Back when the parks were being planned and built, they took advantage of cheap land situated in the surrounding countryside of pine forest and played-out orange groves.  Orlando happened to be the closest town of any significant size, and the local boosters were all too happy to accommodate, so Orlando became synonymous with the parks, which actually lie mostly near what used to be the hamlet of Kissimmee.

But Orlando and its people are more than that, though this is not a treacly homage.

My earliest encounter with Orlando came in 1972, when I was a student at Stetson University in DeLand, Florida.  Back then Orlando had the reputation of being a mostly white, Anglo-Protestant community that was largely racially and ethnically intolerant.  The “N” word was used freely.  African-American communities were walled off from the community at large by the construction of highways.  When a section of town became desirable, its residents were effectively dispossessed of their land and homes through the coordination of real estate developers, local politicians, lawyers, and judges.  The orange groves and vegetable fields used migrant labor.  At first these laborers were also African American, but Hispanic laborers entered the picture in the 1960s and 1970s.  The Edward R. Murrow documentary Harvest of Shame from 1960 chronicled the lives of migrant workers during this early period, a system that lived on into the ’70s and early ’80s.  When the United Farm Workers union began to organize, the large orange and agricultural companies hired Pinkerton and Wackenhut men to break them up, often recruiting the more athletic students from surrounding colleges like Stetson to do some of their dirty work.

For a New Jersey boy looking to make Florida home (this was before I decided to make the United States Navy my home), the old saw back then was that there were two types of Yankees: Yankees and Damned Yankees.  The difference, it was said, was that the latter wouldn’t go back home.  This was a traditional stance in Florida, which in its advertising from the days of Henry Flagler and his Florida East Coast Railway, and Henry Plant and his Plant System of railroads, steamboats, and steamships, beginning in the 1880s, sought to remake itself from a hostile land of uninhabitable swamps, a hotbed of the Confederacy and rampant racism, into a vacation playland constructed to draw the new disposable income of a growing middle class from the urban and suburban communities to the north.  Thus was established the tradition of high profits for the rich developer and low pay for the workers.

But things have changed.

The introduction of air conditioning and the Civil Rights movement began transforming parts of the American South in the 1950s toward more in-migration and diversity, at first confined to the coastal communities of Miami, Ft. Lauderdale, West Palm Beach, and Tampa, but the pace and geographical reach of the change have accelerated.  Orlando has been part of that transformation.  I saw it through my parents, who resettled from New Jersey and called Orlando home for over 20 years.

Our community today is a bastion of tolerance and diversity in a state that still, all too often, is a sea of intolerance, bigotry, and privilege.  Those who come to the town of Orlando are impressed with our modern urban downtown district, our world-class healthcare facilities (which transformed themselves from places of mediocrity and exclusion), and our historic neighborhoods of red-brick roads and oak-lined streets hung with Spanish moss, full of pretty cottages and early vernacular homes.  For foodies, we have world-class chefs opening new restaurants.  We also have vibrant neighborhoods of new immigrants.  We have a Vietnamese neighborhood and Hispanic neighborhoods from multiple cultures: Mexican, Puerto Rican, Cuban, and others.  We have beautiful parks and pedestrian-friendly neighborhoods.  We have world-class musical venues, night clubs, and professional sports franchises.  We have havens for people who otherwise are shunned by the intolerant.

Most importantly, the overwhelming majority of my fellow Orlandoans are friendly and respectful.  It is a community that, I have found, doesn’t care a whit about one’s geographical, racial, or ethnic origins, sexual orientation, or religious beliefs.  It is not, however, utopia.  There are great chasms of class difference that reflect the larger society, the tradition of low pay and union busting continuing to this day.  There are frictions along the edges as the community accepts and incorporates new immigrants and cultural traditions.  We also still have overhanging racial problems borne of exploitation, neglect, and prejudice.

But on the whole, Orlando in the year 2016 is a good place to live and work, it addresses its shortcomings and embraces change while largely working to preserve what is good–and I have lived in communities across the country and traveled around the world against which to compare it.

What has impressed me most about the reaction of my adopted home is the outpouring of love and support for the victims and their families, and for the first responders, made up of local, state, and federal law enforcement, and especially for our medical professionals.  The community is shocked, but rather than turning to anger and intolerance, people have responded by giving blood, donating food and other supplies, being a little kinder in their encounters with their neighbors, more courteous on the morning commute or on the sidewalk, and coming to grips with actions that are both inexplicable and horrific.  More importantly, they are seeing the act at Pulse for what it is: a hate crime against our vibrant LGBTQ community and what it means to our town.

For just down the road, barely half an hour drive away, politicians in Florida communities have used the old scare tactics of pedophilia and rape against transgender bathroom usage and, as such, have cultivated an environment of intolerance and hostility to that community.  The Florida Attorney General fought tooth and nail against gay marriage, and is now having a hard time living up to her words and actions.  Now, thanks to that intolerance and bigotry, the partners of the victims–who are not recognized as such–cannot obtain information about their loved ones.  Gay men giving blood face discrimination based on the illogical fear of AIDS.

People are also recognizing that the widespread availability of powerful firearms meant for war and conflict will only guarantee that we’ll be mourning for other victims again.  It must stop.  It all must stop.

I am overwhelmed with a feeling of sadness for what has happened.  Sadness, love, and support are leading to thought.  Thought and reflection will lead to determined action.

The hate-filled speech coming from some quarters is not having much of an effect here.  Attributing the actions of a first generation American to his ethnic heritage is bigotry and we’ve had enough of that.  One need only substitute the heritage of that individual for the heritage of any other person who commits a crime to see the stupidity of the logic behind it.  There was a day when my own swarthy ancestors of Italian and Jewish origins were similarly tarred.  No group has a corner on integrity or wickedness.

Life does go on and in the near future I will write about the technical aspects of my discipline and my ideas to improve it.  But, for now, here is a document about my little town and life in it in the face of monstrous acts.