Who owns new capabilities in your organization?

I recently participated in SharePoint Saturday – Charlotte, where I talked to many people about Power BI and how it could fit within their own organizations.

I heard the refrain, “No one really owns BI in our organization,” many times along the way. I found this concerning. Many organizations have product owners but not organizational capability owners. This makes sense from a budgeting and management perspective, but it prevents the organization from leveraging the true value of their technology investment. I hear the same refrain when I talk about Yammer or Teams. If there’s no internal advocate for a capability, how will an organization ever adopt it successfully?

“No one really owns BI in our organization.”

Heard at SharePoint Saturday – Charlotte

Tool-Centric isn’t the way

A tool-centric management focus can lead to disappointing internal adoption of a tool. Support teams aren’t typically responsible for driving adoption of a tool; rather, they ensure the tool is working. An internal advocate must be present to understand and drive the organizational change process, which assumes the company has both the appetite and the investment resources to make the behavior change.

I see great sums of money spent on licensing, but short shrift given to funding the necessary work of upgrading the organization. If you take a “build it and they will come” approach, many will never make the trip. It takes work and passion to figure out how to utilize a tool to make your day-to-day work better, and most people don’t have the time or bandwidth to do that work.

New technologies will require new approaches

As new capabilities like Artificial Intelligence, Business Intelligence, and Collaborative Intelligence come into the mainstream, each usually comprises several tools. They also require effort to integrate into the day-to-day workflow. As such, the old product-centric technology model isn’t going to work in this changing world. It’s time to rethink the approach now.

“If there’s no internal advocate for a capability, how will an organization ever adopt it successfully?”

This is beginning to happen at Microsoft as well. One Microsoft is an overarching message around capabilities. Microsoft’s Inner Loop-Outer Loop model comprises many tools for a few key scenarios. I’m hopeful that this is Microsoft’s first step toward communicating what they can do from a capability rather than a product perspective. For example, as a partner and a customer, I’d rather hear a consolidated message around better decision making than several separate Power BI/Cortana Analytics/Azure product pitches where I must figure out the “happy path” for myself. Let’s hope this trend continues.

Organizations need capability advocates for areas like Business Intelligence, Portfolio Management, Team Work, and many others. This role is necessary for thought leadership on where to invest in new technologies and how best to leverage these technologies to provide new capabilities or streamline existing efforts. Without this advocacy, it will be difficult to realize full value from your technology investment. The days of one tool to one capability are long in the rear-view mirror.

3 problems tracking operations in Project, and how to fix them.

Many organizations struggle to manage resource capacity. If they follow the OPRA Resource Capacity model, tracking recurring operations work immediately becomes necessary. This article is based on real-world experience managing large Project implementations. It examines current tracking methods and presents some suggested approaches.

The old way of tracking Operations has issues.

In the past, the primary method used to track recurring operations work has been to create a project containing a single yearlong task with all members of a support team assigned. The theory is that you can easily track all operations for the fiscal year, which many companies use as a boundary.

However, this approach makes three core assumptions, which cause numerous headaches for the operations manager.

  • Operations work is just like Project work
  • You will always use the same amount of operations work every week
  • No one will join or leave your operations team during the year

Myth #1: Operations work is like Project work

Project work is scheduled for a given week, team members do the work, and status is reported. This is where the similarities between project and operations work end.

If you do less work than scheduled on true project work, the incomplete work is typically moved forward into the following week. If you do more work than scheduled, the finish date should come in.

Operations work, however, is about reserving resource capacity for a type of activity. Thus, the difference in how we treat time variation is where the treatment of Project work and Operations work diverges.

If you go over or under time on a given week for an Operations task, it has no impact on the future of the task. You don’t move unutilized operations time forward as you don’t get that unutilized capacity back. You don’t move the end date in if you use more than planned. You simply record what was used, usually for a given week.

Therefore, each reporting period for Operations work should be treated as discrete tracking entities that have no forward schedule impact and preferably, can be closed.
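A minimal sketch of this model, with week labels and hour values that are purely illustrative assumptions:

```python
# Each reporting week is a discrete bucket of reserved operations capacity.
# Week labels and hour values here are illustrative assumptions.
planned = {"2023-W01": 40, "2023-W02": 40, "2023-W03": 40}
actuals = {}

def record_week(week, hours_used):
    """Close out a week: record what was actually used.
    Unlike project tasks, over- or under-use has no forward schedule impact."""
    actuals[week] = hours_used

record_week("2023-W01", 32)  # under plan: unused capacity is not carried forward
record_week("2023-W02", 47)  # over plan: the end date does not move in

# Only weeks not yet closed remain open for tracking.
open_weeks = {w: h for w, h in planned.items() if w not in actuals}
```

Note that nothing in `record_week` touches future weeks; each period stands alone, which is exactly the behavior the single yearlong task cannot provide.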

Myth #2: Level of effort never varies

The reality is that the level of operations work varies week to week, sometimes greatly. There are times during the year when you know there will be more operations work. For example, a year-end close process might be extremely taxing for the Finance support team. The ability to capture this seasonality would tremendously improve your ability to manage capacity for project work.

Also, if you are consuming planned hours on operations work faster than originally planned, the one long task will result in support calls. You may enter October with no remaining time left, and the task will disappear from timesheets.

This again points to a need for discrete tracking entities that can be managed individually for a given time frame.

Myth #3: Teams never change

The yearlong task has a serious user management issue when it comes to tracking team composition. Adding and removing team members on the task requires Project Professional and a fair bit of Project knowledge to do properly.

When Heather joins the team in August and the operations task started in January, how easy is it to add Heather in a way that doesn’t mess up the current team tracking? The same is true if Sanjay leaves the team in April. How do you easily remove his remaining time?

This process is typically beyond the training of most operations managers. They shouldn’t need to be tool experts simply to manage their teams, as this undermines the value of the data.

The one long task also doesn’t lend itself to adjusting operations assignments so that you can easily reflect greater project demands in key weeks.

All of these usability questions lead to a requirement that the solution be usable in Project Web App and not require a PMP to execute.

Requirements synopsis

Our desired Operations management solution should be:

  • Discretely managed, such that variances in time entered do not impact the overall timeline
  • Individually adjustable, so that the time and team composition of each tracking period can be changed
  • Straightforward to manage, using only PWA

In our next post, a suggested solution that meets these three requirements will be presented. You’ll also see examples of how it can be used in real-world settings. If you have a question or comment, feel free to post it below.

The Truth Shall Make You Miserable

[Image: Lack of Faith - Vader - Project Dashboards]

When companies begin making their data more accessible via self-serve Power BI, they soon reach a critical break point in those efforts. The Project dashboards tell them something that isn’t pleasant or doesn’t match the narrative being publicized.

The Reality in Your Project Dashboards

Performance indicators go red. The data shows the stellar progress that was planned isn’t happening. Operational demands for time are much higher in reality than assumed in planning. In short, it shows the harsh reality, as captured in the data.

This is a moment of truth for organizations. Are we going to embrace the transparency or will we attempt to control the narrative?

Data Quality Challenges

The first question is normally: is this data accurate? This is quite reasonable to ask; especially at the beginning, the data stream may not be as clean as it should be.

The approach to this answer can decide your success going forward. For some, questioning the data is a prelude to dismissing the use of the data. For others, it’s a starting point for improvement.

The data deniers will provide many reasons why “we can’t use the data.” They will complain that the data is inaccurate or incomplete. Therefore, they can’t trust their data to integrate its use into their daily work or to use it to make decisions.

These data deniers may have other hidden reasons for their position, such as political or power-base protection. Moving to a data-centric culture is a big change for many organizations, as you have to be open about your failures. No company is above average in every endeavor.

Data deniers also fear how business intelligence might impact their careers. If the corporate culture is one where punishment is meted out when the numbers and updates aren’t desirable, data transparency likely won’t be welcome.

Change the Focus of How Data is Used to Succeed

The key to overcoming the data fear is to change the intent for its use, moving the focus from punishment to improvement.

Companies that succeed with data embrace two simple facts. One, the data is never perfect, and it doesn’t have to be to effect positive change. Two, they’ve defined the level of granularity needed for the data to be used successfully.

How Imprecise Data is Changing the World

We see this approach in our personal lives. For example, the Fitbit device is not 100% accurate or precise. Yet millions are changing their behavior and becoming more active because of the feedback it provides, based on relatively decent data. You may also be carrying a smartphone, which also tracks your steps. Between the two, you have a generally good idea of how many steps you took today.

From a granularity standpoint, we generally aren’t worried about whether you took 4,103 steps or 4,107 steps today; you took about 4,100 steps. Hundreds is the minimum granularity. It could easily be thousands, as long as that granularity meets your information needs.

Cost Benefit of a Minimum Level of Granularity

One area where we see this type of data accuracy dispute in the corporate world is cost data. It’s been ingrained in our psyche that we have to balance to the penny, so our default data granularity is set to the cent.

While that may improve accuracy and precision, it doesn’t make a material difference in impact. For example, if your average project budget is $2M, a 5-cent variance is 0.0000025%. I’ve seen organizations that get wrapped up in balancing to the penny and waste an inordinate amount of time each week getting there.

Instead, let’s define a minimum granularity in the data such that a 1% variance is visible. For a $2M average, you would round at the $10,000 level. Doing so reduces the work spent attempting to make the data perfect. Any variances of that size are significant enough to warrant attention and are more likely to stand out.
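As a sketch, a hypothetical rounding helper shows how a $10,000 granularity absorbs penny-level noise while keeping 1% variances visible:

```python
def round_to_granularity(amount, granularity):
    """Round a cost figure to the chosen minimum granularity.
    (Illustrative helper, not part of any particular BI product.)"""
    return round(amount / granularity) * granularity

budget = 2_000_000
granularity = 10_000  # 0.5% of the $2M average, so a 1% ($20,000) variance stands out

reported = round_to_granularity(2_004_937.42, granularity)
# A 5-cent discrepancy disappears at this granularity; a $20,000 one would not.
```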

Implementing self-serve BI using products like Microsoft Power BI and Marquee™ Project Dashboards will enable your organization to gain great improvements, as long as you are willing to accept the assumptions above. The truth may make you miserable in the short term as you address underlying data and process challenges. In the long run, you and your company will be better served.

Please share your experiences in the comments below.

Use metadata to drive Microsoft Project reporting logic

The need to extract Microsoft Project task-level data efficiently is growing, as many Project Server and Project Online clients are creating Power BI models over this data. Unfortunately, many did not account for this BI need when creating their project template structures. This leads to project template designs that make it difficult or impossible to extract usable data from the Project Server/Online data store.

Microsoft Project Task names should not drive meaning outside of the project team

One common issue is making the task names in your project template meaningful to needs outside of the project team. You might have standard task names for Finance or for the PMO for example.

If you have told your PMs that they cannot rename or add tasks to their plans, you have this issue. You have encoded information into the structure of the project plan. The issue is that this way of encoding makes it very difficult to extract data easily using tools like SSRS and Power BI.

We’ve seen this before, when Content Management Systems were new

This was a common problem early on in file systems and SharePoint implementations in the ’90s and ’00s. A few of you may remember having to adhere to arcane file naming conventions so that we could find the “right” file.

For example, you had to name your meeting notes document using a naming convention like the following: Client X – Meeting Notes – 20010405 – Online.doc. If you accidentally added a space or misspelled something, everything broke.

Metadata, a better approach

With the advent of search, we were able to separate the data from the metadata. This encoding of metadata into the file name data structure went by the wayside. Instead, we now use metadata to describe the file by tagging it with consistent keywords. Search uses the tags to locate the appropriate content. We also do this today for nearly all Internet related content in hopes that Google and Bing will find it.

If we reimagine Project Business Intelligence as a specialized form of search, you see that the metadata approach works to ensure the right information can be found without encoding data into the project plan structure. There are many benefits to using this approach.

Example: Phase 1 tasks encoding before

For example, today I might have the following situation, where the phase information is encoded into the structure.

[Image: Phase 1 tasks with phase information encoded in the plan structure]

Example: Phase 1 tasks encoding after

The metadata approach would yield the following structure instead.

[Image: Phase 1 tasks identified by metadata tags]

Metadata benefits

The biggest benefit is agility. If your business needs change, you can change your data tagging strategy quickly without restructuring all of the projects. You can roll out a new tagging strategy, and the PMs can re-tag their plans in less than a day.

Another benefit is consistency. Using Phase and TaskID, I can extract the Phase 1 tasks consistently across multiple projects. This also has the side effect of reducing the PMO’s auditing efforts.
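As a sketch of that extraction, assuming hypothetical task records where Phase is a metadata tag rather than part of the task name:

```python
# Illustrative task records; field names are assumptions, not an actual schema.
tasks = [
    {"project": "Rollout A", "task_id": 101, "name": "Install network hardware", "phase": "Phase 1"},
    {"project": "Rollout A", "task_id": 102, "name": "Configure firewall",       "phase": "Phase 1"},
    {"project": "Rollout B", "task_id": 201, "name": "User training",            "phase": "Phase 2"},
]

def tasks_in_phase(tasks, phase):
    """Select tasks by metadata tag; renaming a task never breaks this query."""
    return [t for t in tasks if t["phase"] == phase]

phase1_ids = [t["task_id"] for t in tasks_in_phase(tasks, "Phase 1")]
```

Because the filter keys on the tag rather than the name, a PM is free to rename or reorder tasks without breaking downstream reporting.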

You can better serve the collaboration needs of the project team while still meeting the demands of external parties. A project plan is simply the notes of the latest state of the conversation between members of the project team, intended to serve their communication and collaboration needs. The PM is now free to structure the plan to serve the needs of the project team; they simply have to tag the tasks accordingly, which is a minimal effort. These tags can be used to denote external data elements such as billable milestones, phase end dates, etc.

Lastly, the plan structure makes better sense to the team and is easier for them to maintain. Top level tasks become the things that they are delivering instead of some abstract process step. The task roll-up provides the health of and progress toward a specific deliverable.

How do I implement project metadata in Microsoft Project?

It requires three steps in Project Server/Online.

  1. Create a metadata value lookup table
  2. Create a task custom field (you may need more than one eventually, but start simple)
  3. Add this metadata field to your Gantt views for the PM to see and use

Note: Don’t use multi-value selection for this need as this creates complexities in the BI solution.

Below is an example of a lookup table created to support this metadata use. One use of it was to support a visualization of all implementation milestones for the next month across the portfolio. The query looked for all milestones with a Reporting Purpose equal to “Milestone.Implementation” to extract the appropriate milestones.
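A sketch of how such a query might be built against the Project Online ProjectData OData feed. The site URL is a placeholder, and the custom-field column name (`ReportingPurpose`) is an assumption; the actual column name in your feed depends on how the custom field was created:

```python
from urllib.parse import quote

# Placeholder PWA site; ReportingPurpose is an assumed custom-field column name.
pwa_url = "https://contoso.sharepoint.com/sites/pwa"
odata_filter = "ReportingPurpose eq 'Milestone.Implementation' and TaskIsMilestone eq true"

query_url = (
    f"{pwa_url}/_api/ProjectData/Tasks"
    f"?$filter={quote(odata_filter)}"
    "&$select=ProjectName,TaskName,TaskFinishDate"
)
```

The resulting URL can then be fetched with any authenticated HTTP client, or pasted into Power BI’s OData connector as the source.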

To create a task custom field and lookup table, please refer to this link for the details. Note, you can use the same approach in Microsoft Project desktop using Outline codes.

Metadata Lookup Table

The Reporting Purposes lookup table supports two levels of values. This enables multiple classes of tags, such as milestones and phases. This exercise focuses on the Milestone.Implementation value.

[Image: Reporting Purposes lookup table]

Metadata Custom Field

Create the Reporting Purpose task custom field and attach it to the Reporting Purposes lookup table. Make sure “Only allow codes with no subordinate values” is selected. This prevents the user from selecting Milestone without selecting a more specific purpose.

[Image: Reporting Purpose custom field settings]

I hope you find this article useful. Please post questions and comments below.

Controlling Chaos: Calculating Your Project Contingency Budget

I was sitting in my graduate-level statistics class when it hit me: Expected Value calculations could be used to solve my project contingency budget problem!

My Quandary

Earlier that same day, I was sitting in a project review with my senior management. The project team had identified a number of risks with the project. I included a contingency budget task in the schedule based on that qualitative risk assessment. The management challenged me on this inclusion and said the team was artificially inflating the estimates.

The project itself involved a lot of moving pieces, with external vendors, timed deliveries of equipment and geographically dispersed personnel. The project ran nine months, ending in October. We had factored in things like vendor delays, employee sick time, etc. I thought the risk analysis was a reasonable precaution. However, management said “Prove it!”

I Need Days, Not Rankings

Every risk assessment article I had seen at the time involved the use of a qualitative risk ranking (High, Medium, and Low). This qualitative assessment didn’t meet my needs to prove that the amount of contingency was correct.

What I needed was a way to quantify the risk in days, so that I could create a task of X days to track the contingency. It was also the first time this organization had seen this type of risk analysis so the analysis needed to be effective but not overly complex.

Let’s Get Statistical

Back in my 3½-hour statistics class, we also reviewed the Pareto principle: 80% of your outcome impacts are likely the result of 20% of your events. We also discussed Expected Value calculations and the Normal Distribution, and how all of these techniques could be used together.

The Normal Distribution, which you may know as the “Bell Curve,” occurs in many settings and charts the probability distribution for a given scenario. Every point on the curve has an associated Z-Score, from which you can derive the probability of a specific occurrence.

The Idea

My epiphany was that by identifying a small number of project risks and calculating the Expected Value of each risk in days, I could use the sum of the Expected Value outcomes as the duration of my contingency task. It should be a big enough sample to cover most of the potential project variance.

Risks are a way of trying to quantify variance in your schedule. Each task finish date can be thought of as a little bell curve and the sum of those individual task finish variances decides where your final project finish date occurs.

The idea seemed to have merit. I did some reading to validate it and found that disaster recovery planners do a similar calculation when assessing risk. They also add an expiration date for a given event and a number of potential occurrences for a given time frame. Expiration dates are needed, for example, if you have a risk that a vendor will not deliver equipment on time. Once delivered, the risk expires as it is no longer relevant.

Implementation Challenges

Imagine my dilemma. How the heck am I going to explain this concept to my team without it sounding like a lot of work?

Another consideration is that most people don’t think of probability in terms of a number. They use language like:

  • Very Unlikely
  • Unlikely
  • Possible
  • Likely
  • Very Likely

You may be familiar with the word Sigma, as in Six Sigma. Sigma is a measure of variance around a mean. Roughly 95% of all outcomes occur between -2 Sigma and +2 Sigma; anything beyond that range is exceptionally unlikely. Each Sigma point has a corresponding Z-Score, from which the probability of an event occurring at that point on the curve can be derived.

To make this user friendly, I mapped the language terms above to the Z-Scores at the corresponding -2, -1, 0, 1, and 2 Sigma points.

My core calculation is [Expected impact in days if the risk occurs] * [Likelihood it will occur, as a probability] * [Number of possible occurrences in the time period], applied only while the risk’s expiration date has not passed. I needed to capture this in a spreadsheet at the time, and it needed to be easily understandable to the user.
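That calculation can be sketched as follows. The probability mapping is an illustrative assumption (values near the -2 to +2 Sigma points), not the exact table I used, and the risk list is hypothetical:

```python
from datetime import date

# Likelihood words mapped to illustrative probabilities; an assumption for this sketch.
LIKELIHOOD = {
    "Very Unlikely": 0.05,
    "Unlikely": 0.16,
    "Possible": 0.50,
    "Likely": 0.84,
    "Very Likely": 0.95,
}

def expected_days(impact_days, likelihood, occurrences=1, expires=None, as_of=None):
    """Expected impact * probability * possible occurrences,
    counted as zero once the risk's expiration date has passed."""
    if expires is not None and as_of is not None and as_of > expires:
        return 0.0
    return impact_days * LIKELIHOOD[likelihood] * occurrences

# Hypothetical risk list: (description, impact in days, likelihood, occurrences, expires)
risks = [
    ("Hurricane delays Miami vendor", 5, "Possible", 1, date(2018, 9, 30)),
    ("Key engineer unavailable",      3, "Likely",   2, None),
]
as_of = date(2018, 6, 1)
contingency_days = sum(expected_days(d, l, n, exp, as_of) for _, d, l, n, exp in risks)
# 5 * 0.5 + 3 * 0.84 * 2, roughly 7.5 days of contingency
```

The sum of these per-risk expected values becomes the duration of the contingency task, and expired risks simply drop out of the total.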

The Tool

The resulting spreadsheet captured the risk, the impact as measured in days, the expiration date, and a dropdown for the likelihood that the risk would occur. We formulated a risk list of ten items and found our calculations added two days of exposure to our original estimate of contingency. Ten items seemed like a sufficient sample without creating a lot of additional work to formulate the list.

For example, one of our vendors was in Miami and had a key deliverable in late August. I grew up on the Gulf Coast and knew this was peak hurricane season. If a hurricane hit the area, they would have a week of downtime.

Originally, we were thinking this was an unlikely event. One of the team members pointed out that the National Weather Service was predicting a higher than normal number of hurricanes for the season. The team then upgraded the Risk rating to Possible. The risk was then documented as shown in the table below. We did this for each of the risks and the sum of the values was the duration of the contingency task.

[Image: Risk tracking table]

The Result

The new analysis was introduced at the next management meeting. They were dubious, but they allowed us to use it on our project. As risks occurred, we tracked them and drew down days from the contingency budget. We encountered a number of issues along the way, some anticipated and a number that were not.

We ended the project only one day over our contingency budget date. Considering we had 28 days of contingency, the management reaction to a 1 day slip was much more muted than communicating a 29 day slip. We also knew why we consumed 28 days of contingency, which gave management confidence that the situation was being actively managed.

Summary

I’ve used this basic technique successfully on other projects, where we were able to increase project on-time rates from 35% to 70%. This technique also gets your team in the right mindset, as the analysis is reviewed at every status meeting, getting them thinking about how to address risks proactively.

This post is part of the Chaos and the Cubicle Hero series. Other posts can be found here, here, and here.

 


Chaos Management and the Cubicle Hero

When asked, “What do you want to be when you grow up?” you may have replied, “Firefighter.” If you did, I’m sure you meant one of the awesome individuals who provide medical services, perform rescues, and ride the fire trucks. While most of us never realized that dream, there are days at the office where you probably feel that “Firefighter” should be your job title.

Welcome to the wonderful world of the Cubicle Hero, where fighting fires is part of your job!

Perhaps you ask yourself at the end of each day, “How did I get here?” Many feel stuck in these roles without a way out and are puzzled as to how it happened. I talked about the True Cost of the Cubicle Hero in this previous article, so let’s look at how Cubicle Heroes form.

One reason Cubicle Heroes arise is a work environment that isn’t structured to respond well to chaos. If there are no processes for reacting to chaos in a controlled manner, the result is a crisis that some brave person must step in to address. That person is then caught in the role going forward, evolving into the Cubicle Hero. Chaos is ever present and is needed for the organization to evolve and remain competitive, so the organization is going to run out of Heroes unless a systemic way of reacting is created.

Internal efforts, such as implementing a new HR system, create short-term chaos and have long-term impacts on the organization. If your organization doesn’t have a formal process for transitioning projects to production, Cubicle Heroes usually form from the project team members who hold the detailed knowledge about the project’s deliverables. A problem related to the project arises, a team member solves it, and that person becomes the Hero going forward.

An ad hoc project transition process creates “human hard drives” out of the project team members, who must store and retrieve organizational knowledge as needed. This restricts the ability of team members to grow their skills, as letting go of that knowledge results in a loss to the organization. A formal transition process ensures relevant information is captured so that it can be widely used within the organization, freeing the team members to move on.

External events, such as a large client with a new, immediate need or a viral photo of a dress of indeterminate color, are also chaos sources. Does your company treat these requests as fire drills, or does it have a way to manage them?

The best companies have a deep respect for chaos and put practices in place to manage it and to learn from it. New products and services are sometimes rooted in chaos learnings. Successful chaos management becomes a source of positive change within an organization, as it provides opportunities for people to learn new skills and encounter new situations. As discussed in the earlier article, these new skills and experiences prepare these individuals to be the Explorers that we need.

If your company grows Cubicle Heroes, the first step in the solution is to address the underlying cultural issues. Adding tools too soon will simply result in chaos at light speed. Addressing this issue is especially problematic in organizations where management has built careers on firefighting abilities. Cubicle Heroes tend to prosper in environments that lack visibility into cause and effect. One of my Project Management Office tool implementations came to a grinding halt when the sponsor, a master Cubicle Hero, realized the system would also show that he was the company’s biggest fire starter.

Your company’s reaction to chaos is a key process for maximizing your long-term competitiveness and productivity. One way to address chaos is to create processes for categories of chaos. Categories keep the process manageable without having to address each specific and unique possibility.

One category should also be “other,” as the truly unexpected will happen. One organization succeeded by assigning a team member to work the “other” category, thereby sparing the rest of the team from being randomized by the unexpected.

I’ll write more on this topic in the weeks to come. For other articles, please visit my blog at http://www.tumbleroad.com/blog.

The True Cost of the Cubicle Hero

Heroes. Society loves them, honors them, and exalts them. Corporate offices are filled with a new breed of hero, the Cubicle Hero. These are the people who go beyond the norm and figure it out. They burn the midnight oil and they get it done. They overcome the chaos and reach the goal. All hail the hero!

However, heroes tend to overstay their welcome. In the movie “The Dark Knight,” Harvey Dent intones, “You either die a hero or you live long enough to see yourself become the villain.” The Cubicle Hero’s individual victory is celebrated initially, but situations change and the need for the hero diminishes over time. Or so we hope.

Cubicle Heroes can become process bottlenecks and productivity killers. Why? The organization’s reward structure doesn’t lead them to become mentors. The Cubicle Hero has great value to the organization, but their way of working can’t scale, and the lack of information sharing prevents the organization from truly benefiting from their victory. The hero then gets involved in every project that touches their area and becomes a bottleneck, as the demand for their time is greater than what is available. Thus, the hero slowly becomes the villain, delaying projects.

Many years ago, I worked at a company where a core process of the company was dependent on a very skilled hero. He was a great employee and did his job earnestly. However, he also guarded his knowledge so that he was the only one who understood it completely. This became a serious company concern when he was involved in an accident, leaving him unable to work for several months. Several key projects were impacted.

Changing the perspective, expectations, and language of what happens as part of these efforts can lead to a different outcome. We need to make it clear that we want and need Corporate Explorers rather than Cubicle Heroes. Leif Erikson, the Viking, may have been the first to reach North America on a heroic journey, but it was the explorer Columbus who opened North America to the world.

Explorers and Heroes share many common traits. They can see the big picture. They can dig down into the details when needed. They put in the extra effort to get the job done. The real difference is in the aftermath. Explorers open new trails so that others may come behind them. Explorers become guides to help others make the same journey. Heroes, on the other hand, continue to hold onto their conquest.

Changing your company culture to encourage Explorers over Heroes creates a scalable culture of knowledge sharing. This organizational approach leads to greater productivity, higher quality collaboration and timelier project progress.

To summarize, I recommend reviewing the following in your organization.

  • Provide a clear path for as many people as possible to earn the rewards for exceptional effort, in a way that others, and ultimately the organization, can leverage
  • Provide public recognition for knowledge sharing
  • Structure rewards within the process so that we move from a mentality of one-time hero creation to our true goal of constant productivity improvement
  • Provide the Explorer with opportunities to help facilitate and implement their achievement within the organization. This keeps the Explorer engaged and looking for additional ways to improve
  • Provide collaborative tools like Office 365 and Yammer to help facilitate and support the Explorer’s journey

If you are ready to address more productivity issues in your organization, talk to us or join our Community.

Project Tasks are Your Lowest Priority

Project tasks are the lowest priority work you have on any given day. Wait, what?

It’s true! Strategically, we know project work is the most important future investment for the company. Yet when you break down what you do every day, you’ll see that you are fitting project work in around everything else you have to do. It’s frustrating: you know you could be doing more, and someone clearly thought you had the time to get this work done.

If you don’t believe the premise, imagine the following scenario. You are staying late at the office to get some project work completed. Your manager’s manager sees that you are in the office, comes over, and asks you to do a task for tomorrow morning. If your answer is “I’m sorry, but I can’t, because I really need to get this project work completed,” their response will determine the relative priority of project work in your environment. For some, rejecting the task would be a career-limiting move.

Perhaps then, we are asking the wrong question when it comes to resource capacity management. Instead of asking whether this resource has free capacity to do the work, shouldn’t we be asking if the resource has enough consolidated free time to work on project work? If they do not, what can we do to remedy this situation?

In my “Done in 40” webinar, we discussed recent research by time-tracking software companies that identified how the top 10% of productive employees work in an agile fashion. These employees typically work for 52 minutes and then take a 17-minute break away from the work. This is consistent with studies of ultradian body rhythms from the ’90s and ’00s, which showed that your focus naturally waxes and wanes on a 1.5-2 hour cycle. These work sprints can make you very productive and help reduce mistakes and rework.

I’ve personally tried the sprint approach and I can say, it works well for me. I use a timer app on my Pebble watch to monitor my sprints. Fifty minutes is roughly the time where the mind starts wandering to “Did Joe ever respond to my email?” or “Is there coffee?”. Three sprints enable the top three daily tasks to get done easily.

The catch is that you need 69 uninterrupted minutes to complete a personal sprint. This leads us back to the question of whether a resource has consolidated availability. Yes, they may have 3 hours available that day, but if it comes in 15-minute increments, it’s not usable.
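The consolidated-availability idea above is easy to sketch in code. The snippet below is a minimal illustration, not a real scheduling tool: it filters a list of free calendar blocks down to those long enough to hold one full 52+17 minute sprint. All names and the sample calendar are made up for the example.

```python
from datetime import datetime, timedelta

SPRINT_MINUTES = 52 + 17  # 52-minute work block plus a 17-minute break

def usable_sprint_slots(free_blocks, sprint_minutes=SPRINT_MINUTES):
    """Return only the free blocks long enough to hold one full sprint.

    free_blocks is a list of (start, end) datetime tuples representing
    uninterrupted availability pulled from someone's calendar.
    """
    return [
        (start, end)
        for start, end in free_blocks
        if (end - start) >= timedelta(minutes=sprint_minutes)
    ]

day = datetime(2024, 1, 9, 9, 0)

# Three hours of total availability, chopped into twelve 15-minute gaps:
fragmented = [
    (day + timedelta(minutes=30 * i), day + timedelta(minutes=30 * i + 15))
    for i in range(12)
]
# Versus a single 90-minute consolidated block:
consolidated = [(day, day + timedelta(minutes=90))]

print(len(usable_sprint_slots(fragmented)))    # 0 -- unusable for sprints
print(len(usable_sprint_slots(consolidated)))  # 1
```

The fragmented calendar has more total minutes free, yet yields zero usable sprint slots, which is exactly the point: raw capacity and consolidated capacity are different questions.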

When a client with project throughput issues engages my services, I find it’s usually not a project management issue. Many times, the lack of consolidated availability is preventing the project work from happening. If you are interrupted every 10 minutes, as are most office workers in the United States, it’s very hard to get work done. If you are having issues getting projects through the pipe, perhaps it’s time to look beyond your projects and to your operational work processes.

We spend the majority of our energy providing oversight and processes for projects, which are a minority of the work, instead of doing the same for operational work. McKinsey recently released a white paper showing that most operational spend goes to keeping the company running; new projects are a small portion of the overall effort. Yet we don’t monitor operational work holistically the way we do projects. Perhaps it’s time we start.

Project management processes are very helpful and needed. We’ve worked out how to anticipate and reduce risk and how to deliver the reward. We need to apply these approaches to how we manage all work. It’s the operational work that provides the overall context within which we do our project work. If improperly managed, it also constricts our ability to get our project work done. Operational work management improvements could yield the biggest benefit by enabling the consolidation of availability, yielding more usable time for project work.

If you are interested in finding out more about the specific techniques and how to use Microsoft Project to support this need, sign up here and get the recording link to the full “Done In 40” webinar.

Is Your Data Looking for a Problem to Solve?

Data can be a wonderful thing. There are so many stories you can tell with the right data. These stories have the power to persuade and motivate positive changes in the organization. And so the marketing stories go on and on about the power of data.

The question is, do you have the right data? Can you tell compelling stories with your data? How do you know what is possible or needed? Let’s discuss some straightforward techniques to help you determine the answer.

Many times, I’ve walked into a client’s office and was greeted with large quantities of data. However, the client was still struggling to tell a compelling story from their data store. There was always doubt as to whether the reports they generated were useful or compelling.

I’ve also encountered clients who maintained additional data that they thought someone *might* need. Unfortunately, wishful thinking is not an effective business strategy, and it can waste scarce company resources.

Here are some signs that you might need to rethink your data strategy.

  • Do your reports require data definitions, report keys, and long explanations for someone to understand what they are viewing?
  • Do you have a training session on “how to use the reports”?
  • Are you maintaining data whose purpose or use you are unsure of?

If you answered yes to any of these, it might be time for a little housecleaning.

The Effective Simplicity™ approach dictates that we maintain as little data as possible while still meeting the business need. Minimizing data reduces overall cost by reducing the overhead of managing the system. The trick is determining what data is needed, which is sometimes easier said than done.

At Tumble Road, we use an approach that follows an understandable pattern and helps determine the context and the need for the right data. The pattern is Conversation – Question – Supporting Data and Activities.

Conversation is about identifying the specific meeting or use case that the tool will ultimately support. A diagram of some standard Project Management conversations is illustrated below. The conversation has a schedule, so you can determine how often the data needs to be updated. It has standard participants, which is helpful if you need feedback on the data you will provide, and helps later if you need to inform report consumers of an upcoming change. Lastly, a list of 1-3 key questions is defined for the use case.

Project Communications

Key Questions are the information needs that need to be supported. The Key Question also determines the form in which the answer must be presented.

For example, if the conversation is the weekly Tuesday portfolio status meeting between IT Management and Finance, likely, you will need to answer questions similar to:

  • What have we spent?
  • What did we plan to spend so far?
  • What are we planning to spend in the near future?

Supporting Data and Activities are the exact data elements that allow the key question to be answered, plus the activities necessary to generate, collect, and maintain that data. The data can help you identify other incidental data needed for organizing and filtering the Supporting Data. The activities can help you spot process gaps that would prevent you from successfully addressing the question.
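To make the pattern concrete, here is a minimal sketch of how a Conversation, its Key Questions, and their Supporting Data and Activities might be captured as a simple structure. The class and field names are illustrative assumptions for this post, not an actual Tumble Road tool.

```python
from dataclasses import dataclass, field

@dataclass
class KeyQuestion:
    text: str
    supporting_data: list[str] = field(default_factory=list)  # exact data elements needed
    activities: list[str] = field(default_factory=list)       # work to generate/maintain them

@dataclass
class Conversation:
    name: str
    schedule: str                # drives how often the data must be refreshed
    participants: list[str]      # who to consult about, and notify of, changes
    key_questions: list[KeyQuestion] = field(default_factory=list)  # keep to 1-3

# Example drawn from the portfolio status meeting discussed in this section:
portfolio_review = Conversation(
    name="Weekly portfolio status meeting",
    schedule="Tuesdays",
    participants=["IT Management", "Finance"],
    key_questions=[
        KeyQuestion(
            text="What have we spent?",
            supporting_data=["Project", "Cost Type", "Fiscal Year",
                             "Fiscal Period", "IT Director"],
            activities=["PM publishes cost updates by Monday"],
        ),
    ],
)

print(portfolio_review.schedule)  # Tuesdays
```

Writing the pattern down this way forces each report request to name its conversation, cadence, and audience before any data is collected.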

When examining the “What have we spent?” question above, Finance wants project spend broken down by Project and Cost Type (Capital or Expense), with totals summed by Fiscal Year and Fiscal Period for each IT Director.

From this short exercise, I already know the following is needed:

    • A lookup table for Cost Type with values of Capital, Expense.
    • A task custom field with Assignment Roll Down of the values enabled.
    • A lookup table for IT Directors, to maintain consistency.
    • A Project custom field for IT Director so that this can be assigned to each project.
    • To add this Project custom field to the correct Project Detail Page so that the PM can maintain this data.
    • Resource rates in the system so that costs can be automatically calculated.
    • To enter the Fiscal Periods in Project Online.
    • A project template which exposes the Cost Type column at the task level.
    • The cross-tab report layout
Director/Project/Cost Type | FY2014-M01 | FY2014-M02 | FY2014-M03 | FY2014-M04 | FY2014-M05
John Smith                 | $100       | $100       | $100       | $100       | $100
  Project X                | $100       | $100       | $100       | $100       | $100
    Capital                | $50        | $50        | $50        | $50        | $50
    Expense                | $50        | $50        | $50        | $50        | $50

As you can see, you can generate quite a bit of actionable detail using this approach. There are several follow-up questions that can now be asked since you have specific, concrete examples from which to work. For example:

  • Do you need filtering by director?
  • Do you show only incomplete projects?
  • Is the IT Director data something that should be maintained by PMs?
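As a hedged illustration of how the cross-tab rolls up, the sketch below aggregates raw cost rows into the Director / Project / Cost Type hierarchy with plain dictionaries. The row layout and field names are assumptions for this example, not the actual Project Online schema or report engine.

```python
from collections import defaultdict

# Raw cost rows: (director, project, cost_type, fiscal_period, cost)
rows = [
    ("John Smith", "Project X", "Capital", "FY2014-M01", 50),
    ("John Smith", "Project X", "Expense", "FY2014-M01", 50),
    ("John Smith", "Project X", "Capital", "FY2014-M02", 50),
    ("John Smith", "Project X", "Expense", "FY2014-M02", 50),
]

def crosstab(rows):
    """Sum cost with the hierarchy down the rows and fiscal periods across."""
    table = defaultdict(lambda: defaultdict(int))
    for director, project, cost_type, period, cost in rows:
        # Roll each cost up to every level of the hierarchy, as in the report.
        for key in (director,
                    f"{director}/{project}",
                    f"{director}/{project}/{cost_type}"):
            table[key][period] += cost
    return table

report = crosstab(rows)
print(report["John Smith"]["FY2014-M01"])                    # 100
print(report["John Smith/Project X/Capital"]["FY2014-M02"])  # 50
```

Each $50 Capital or Expense row contributes to its own line, to its project’s line, and to its director’s line, which is why the director and project rows show $100 per period while the cost-type rows show $50.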

This approach also aids in designing supporting processes. You already know this is a weekly review, so cost information has to be updated on a weekly basis by the PM. As the meeting happens on Tuesday, Monday is the deadline to get updates in and published.

A final advantage is that it is easy to track the progress of this work visibly with your users. For these engagements, the top-level summary task represents the conversation, a key question represents a sub-task, and key activities, such as gathering required data and defining the maintenance process, are the third-level tasks. This makes it easy to determine the “health” of the conversation implementation.

Once you are maintaining the data essential to answering the questions, with clearly defined uses, you should see a reduction in overhead as well as evidence of easier training conversations. If you would like to learn more, please join our Community here.