The Augmented Project Manager

This article was originally published on our CIO Magazine blog, the Effective Enterprise. After seeing recent industry presentations on bots, machine learning and artificial intelligence (AI), I see the application of these technologies changing the practice of project management. The question is whether this future is desirable, and whether we will have a choice.

The project manager role and the growing role of Artificial Intelligence

Much of the daily work of a project manager has not dramatically changed over the last 30 years. We may use different management methodologies, but we spend a great deal of time manually collecting and disseminating information between the various roles on a project. This effort directly results from the need to fill the information gaps caused by systems that can't capture what is truly happening within the organization. In a recent PMI-sponsored roundtable discussion, missing or incorrect data was highlighted as a significant issue. Today's systems are totally dependent on humans entering information, which can be nuanced or simply never entered.

The combination of artificial intelligence in the form of bots and cloud computing could radically change this situation. PM effectiveness would be dramatically enhanced, and the need for some PM roles would likely diminish. In the future, as data capture becomes richer and more automated, we may see new adviser services arise from improved data quality and completeness. I foresee significant improvements in three key areas.

Planning

One of the black arts of project management is predicting the future, where we represent this future state as a new project plan. We draw upon our own domain and company experience to determine the steps, resources and time needed to accomplish the goal. Our success rate at predicting the future is not good. Our predictions are fraught with error due to the limits of our experience and that of the organization. If you’ve ever managed a project for something completely new to an organization, you are familiar with this situation.

Imagine if your scheduling bot generated a proposed project plan based on the aggregated and anonymized experiences of similarly sized companies doing the same type of project. Today, we use techniques like Monte Carlo simulation to approximate this information; the bot could incorporate real-world data, potentially yielding better results.
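
As a point of reference, here is a minimal sketch of the kind of Monte Carlo estimate many of us run manually today; the task names and duration ranges are invented for illustration.

    import random

    # Hypothetical optimistic / most likely / pessimistic durations, in days
    tasks = {
        "Requirements": (5, 8, 15),
        "Build": (20, 30, 50),
        "Test": (10, 15, 30),
    }

    def simulate_once():
        # Sample each task from a triangular distribution; assume tasks run in sequence
        return sum(random.triangular(low, high, mode) for low, mode, high in tasks.values())

    runs = sorted(simulate_once() for _ in range(10_000))
    print(f"P50 finish: {runs[5_000]:.0f} days, P80 finish: {runs[8_000]:.0f} days")

A scheduling bot would replace the invented ranges above with distributions learned from real, anonymized project data.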

Benchmarking of business data has been around for some time. These new cloud capabilities could see benchmarking expanded to include real-time project management data.

Resource allocation

Another common challenge for project managers is resource constraints. Imagine a world where your resource pool is the world, and it's as easy to use as Amazon.

We are seeing the continued growth of the freelance nation trend in corporations. Currently, corporations use agencies to locate and recruit talent. Agencies may simply be a stopgap as bots become a more efficient clearinghouse of freelancer information. Staff augmentation agencies could become obsolete.

For example, your resourcing bot determines that you need a social media expert on your project on April 5th for two days of work. It searches data sources like LinkedIn and your public cloud calendar to produce a list of suitable, available candidates. Three are on the U.S. West Coast, one is in Paris and one is in Sydney. It then automatically reaches out to these candidates with offers. If multiple people accept, it automatically manages the negotiation. Once complete, the planning bot is informed, a virtual desktop with the requisite software is provisioned, user login credentials are generated and the specific task information is sent to the freelancer. When the job is complete and rated as satisfactory, the bot coordinates with your accounts payable system to pay the freelancer. The planning bot automatically updates the plan and pushes the data to the BI dashboards.
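
As a toy sketch of just the matching step such a bot might perform (the candidate records, fields and filtering logic are all invented for illustration):

    from datetime import date

    # Hypothetical candidate records assembled from public profiles and calendars
    candidates = [
        {"name": "Alice", "skill": "social media", "location": "Seattle",
         "available": {date(2024, 4, 5), date(2024, 4, 6)}},
        {"name": "Pierre", "skill": "social media", "location": "Paris",
         "available": {date(2024, 4, 5)}},
        {"name": "Mia", "skill": "data science", "location": "Sydney",
         "available": {date(2024, 4, 5), date(2024, 4, 6)}},
    ]

    need = {"skill": "social media", "dates": {date(2024, 4, 5), date(2024, 4, 6)}}

    # Keep candidates with the right skill who are free on every required date
    matches = [c for c in candidates
               if c["skill"] == need["skill"] and need["dates"] <= c["available"]]

    for c in matches:
        print(f"Offer sent to {c['name']} ({c['location']})")

The negotiation, provisioning and payment steps would be handoffs to other systems rather than logic in the bot itself.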

Tracking

Project feedback loops on work are awful. The largest challenge is incomplete data, which results from increasingly fragmented work days, the limits of workers' memory and tools that rely on human input. The data is also incomplete because entering it provides little benefit to the person doing the entry.

Workers are overwhelmed with tasks arriving via multiple communication channels and no consolidated view.

Imagine a world where the time sheet is antiquated. Today, we have systems such as Microsoft Delve that know what content you've touched. We have IP-based communication systems that know what collaborations you've conducted. We have machine learning capabilities that can determine what you've discussed and the content of the documents you've edited. We even have facial recognition capabilities and other features that can track and interpret your movements. Given all of this, why is a time sheet necessary?

Professional athletes use this type of data in competition settings to improve their performance, using the feedback to spot areas for development. Combining this kind of activity information could prove a boon to workplace productivity.

I can see this working as a "Fitbit"-style feedback loop that helps workers be better at their jobs and allows them to get home on time. Doing so provides direct benefit to the employee and reduces the Big Brother feel of this data.

The personal bot acts as a personal assistant, reminding the worker of tasks mined from meeting notes and marking tasks as complete in real time. All the while, it also keeps track of time spent, enabling the worker to get a better picture of how they spent their time.

Brave new world

There are many challenges with the view I've presented above. Many of them are the same ones we faced when we automated and integrated procurement processes. It is also hard to deny that there are compelling opportunities to improve workers' lives as well. Bots, machine learning and artificial intelligence are reachable capabilities that should be incorporated into the PM toolbox as you plan your organization's future work management needs.

I look forward to reading your viewpoints and experiences in the comments below.

3 problems tracking operations in Project – Part 2

In part 1, we outlined the problems that many customers encounter managing operational work in Project. We need to capture this information for higher-fidelity resource capacity data. However, previous methods required more work from Resource Managers than desired.

In this article, you’ll see how to set up an operational project plan template that enables you to treat each week as a distinct unit. You’ll also see how to set up the project in Project Professional and how to configure an Operational Project Enterprise Project Type.

Part 3 will show how to set up the project using PWA only, since the requirement is to enable the Resource Manager to manage the project easily via Project Web App. You'll also be taken through how to use this setup in a day-to-day fashion and will see timesheet and approval configuration. Lastly, you'll see how this information appears in Power BI reporting.

If you have comments or questions, please leave them below in the comments field.

3 problems tracking operations in Project, and how to fix them.

Many organizations struggle to manage resource capacity. If they follow the OPRA Resource Capacity model, tracking recurring operations work immediately becomes necessary. This article is based on real-world experiences managing large Project implementations. Current tracking methods will be examined and some suggested approaches presented.

The old way of tracking Operations has issues.

In the past, the primary method used to track recurring operations work has been to create a project containing a year-long task assigned to all members of a support team. The theory is that you can easily track all operations for the fiscal year, which many companies use as a boundary.

However, this approach makes three core assumptions, which cause numerous headaches for the operations manager.

  • Operations work is just like Project work
  • You will always perform the same amount of operations work every week
  • No one will join or leave your operations team during the year

Myth #1: Operations work is like Project work

Project work is scheduled for a given week, team members do the work and status is reported. This is where the similarities between Project and Operations work end.

With true project work, if you do less work than scheduled, the incomplete work is typically moved forward into the following week. If you do more work than scheduled, the finish date should move in.

Operations work, however, is about reserving resource capacity for a type of activity. How we treat this time variation is therefore where the treatment of Project work and Operations work diverges.

If you go over or under time on a given week for an Operations task, it has no impact on the future of the task. You don’t move unutilized operations time forward as you don’t get that unutilized capacity back. You don’t move the end date in if you use more than planned. You simply record what was used, usually for a given week.

Therefore, each reporting period for Operations work should be treated as a discrete tracking entity that has no forward schedule impact and, preferably, can be closed.
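
A small sketch to make the two treatments concrete (the hours are invented):

    # Project-style tracking: incomplete work rolls forward into the following week
    planned = [40, 40]                  # hours planned for weeks 1 and 2
    actual_week1 = 30
    planned[1] += planned[0] - actual_week1   # 10 unfinished hours move into week 2
    print(planned)                      # [40, 50]

    # Operations-style tracking: each week is a discrete, closable bucket
    ops_planned = {"W14": 40, "W15": 40}
    ops_actual = {"W14": 30}
    variance = {week: ops_actual[week] - ops_planned[week] for week in ops_actual}
    print(variance)                     # {'W14': -10}; nothing rolls forward, W15 stays at 40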

Myth #2: Level of effort never varies

The reality is that the level of operations work varies week to week, sometimes greatly. There are times during the year when you know there will be more operations work. For example, a year-end close process might be extremely taxing for the Finance support team. The ability to capture this seasonality would tremendously improve the ability to manage capacity for project work.

Also, if the team is consuming planned hours on Operations work faster than originally planned, the one long task will result in support calls. You may enter October with no remaining time left, causing the task to disappear from timesheets.

This again points to a need for discrete tracking entities that can be managed individually for a given time frame.

Myth #3: Teams never change

The year-long task has a serious user management issue when it comes to tracking team composition. Adding and removing team members on the task requires Project Professional and a fair bit of Project knowledge to do properly.

When Heather joins the team in August and the operations task started in January, how easy is it to add Heather in a way that doesn’t mess up the current team tracking? The same is true if Sanjay leaves the team in April. How do you easily remove his remaining time?

This process is typically beyond the training of most Operations Managers. They shouldn't need to be tool experts simply to manage their teams, as this creates a situation that detracts from the value of the data.

The one long task also doesn’t lend itself to adjusting operations assignments so that you can easily reflect greater project demands in key weeks.

All of these usability questions lead us to a requirement that the solution be usable from Project Web App and not require a PMP to execute.

Requirements synopsis

Our desired Operations management solution should be:

  • Discretely managed, such that variances in time entered do not impact the overall timeline
  • Individually adjustable, in both time and team composition, for each tracking period
  • Straightforward to manage, using only PWA

In our next post, a suggested solution that meets these three requirements will be presented. You'll also see examples of how it can be used in real-world settings. If you have a question or comment, feel free to post it below.

The Benefit of One Simple Change in Project Online

What we typically do today.

This post is about what can be gained from making one simple change to your Resource Custom Field configuration. Many companies have an FTE Yes/No resource custom field to distinguish between employees and contractors. This has been the approach for many years; I have seen it many times in Project 2007/2010/2013 implementations.

A better approach.

I suggest using a Resource Company custom field instead. You would create a lookup table, Resource Companies, where you include the name of your company and the company names of all of your contractors. You then attach this lookup table to the new Resource Company custom field. Lastly, update the value for all resources, where FTEs are set to your company name and contractors are set to their individual company name.

So why do this?

This simple change creates richer data for reporting. For example, suppose your vendor management team is in the process of renegotiating a contract with a specific vendor, and that vendor is supplying a number of contractors across many projects. If you use this approach, extracting how much work this vendor is doing and which projects they are working on is as easy as setting a simple filter.
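
As a minimal sketch with pandas, assuming you have already exported assignment-level report data; the column names and values here are illustrative placeholders, not the exact field names Project Online generates:

    import pandas as pd

    # Hypothetical assignment-level export
    assignments = pd.DataFrame({
        "ProjectName": ["Portal Upgrade", "CRM Rollout", "Portal Upgrade"],
        "ResourceName": ["Sanjay", "Heather", "Dana"],
        "ResourceCompany": ["Acme Consulting", "Acme Consulting", "Our Company"],
        "Work": [120, 80, 200],
    })

    # One filter answers "how much work is this vendor doing, and on which projects?"
    vendor = assignments[assignments["ResourceCompany"] == "Acme Consulting"]
    print(vendor.groupby("ProjectName")["Work"].sum())

With the FTE Yes/No field, answering the same question would require a separate mapping of contractors to vendors.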

For employees, you may even consider making the lookup value hierarchical, so that you can enable reporting at the "your company name.your division name" level.

Are you using this approach today? If so, please share your experiences in the comments.

The Truth Shall Make You Miserable

When companies begin making their data more accessible via self-serve Power BI, they soon reach a critical break point in those efforts: the Project dashboards tell them something that isn't pleasant or doesn't match the narrative being publicized.

The Reality in Your Project Dashboards

Performance indicators go red. The data shows that the stellar progress that was planned isn't happening. Operational demands for time are much higher in reality than assumed in planning. In short, the dashboards show the harsh reality, as captured in the data.

This is a moment of truth for organizations. Are we going to embrace the transparency or will we attempt to control the narrative?

Data Quality Challenges

The first question is normally: is this data accurate? This is quite reasonable to ask, especially since, at the beginning, the data stream may not be as clean as it should be.

The approach to this answer can decide your success going forward. For some, questioning the data is a prelude to dismissing the use of the data. For others, it’s a starting point for improvement.

The data deniers will provide many reasons why "we can't use the data." They will complain that the data is inaccurate or incomplete, and therefore that they can't trust it enough to integrate it into their daily work or to use it to make decisions.

These data deniers may have other, hidden reasons for their position, such as politics or protection of a power base. Moving to a data-centric culture is a big change for many organizations, as you have to be open about your failures. No company is always above average in every endeavor.

Data deniers also fear how business intelligence might impact their careers. If the corporate culture metes out punishment when the numbers and updates aren't desirable, data transparency likely won't be welcome.

Change the Focus of How Data is Used to Succeed

The key to overcoming the data fear is to change the intent for its use, moving the focus from punishment to improvement.

The successful companies using data embrace two simple facts. One, the data is never perfect, and it doesn't have to be to effect positive change. Two, they've defined the level of granularity needed for the data to be used successfully.

How Imprecise Data is Changing the World

We see this approach in our personal lives. For example, the Fitbit device is not 100% accurate or precise. Yet millions are changing their behavior and becoming more active because of the feedback it provides, based on relatively decent data. You may also be carrying a smartphone, which also tracks your steps. Between the two, you have a generally good idea of how many steps you took today.

From a granularity standpoint, we aren't generally worried about whether you took 4,103 or 4,107 steps today; you took roughly 4,100 steps. Hundreds are our minimum granularity. It could easily be at the thousands level, as long as that granularity meets your information needs.

Cost Benefit of a Minimum Level of Granularity

One area where we see this type of data accuracy dispute in the corporate world is cost data. It's been ingrained in our psyche that we have to balance to the penny, so our default data granularity is set to the cent.

While that may improve accuracy and precision, it doesn't make a material difference in impact. For example, if your average project budget is $2M, a 5-cent variance is a percentage variance of 0.0000025%. I've seen organizations get wrapped up in balancing to the penny and waste an inordinate amount of time each week getting there.

Instead, let's define a minimum granularity in the data such that a 1% variance is visible. For a $2M average, you would round at the $10,000 level. Doing so reduces the work spent attempting to make the data perfect, and any variances of that size are significant enough to warrant attention and more likely to stand out.
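
As a sketch, applying a minimum granularity to reported costs is a one-line rounding rule; the constant and function names are invented, and the $10,000 level follows the $2M example above:

    GRANULARITY = 10_000   # minimum reporting granularity, in dollars

    def report_value(amount: float) -> int:
        """Round a cost to the nearest reporting bucket."""
        return round(amount / GRANULARITY) * GRANULARITY

    print(report_value(1_994_387.62))   # 1990000: the meaningful variance stays visible
    print(report_value(2_000_000.05))   # 2000000: the 5-cent noise disappears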

Implementing self-serve BI using products like Microsoft Power BI and Marquee™ Project Dashboards will enable your organization to make great improvements, as long as it is willing to accept the assumptions above. The truth may make you miserable in the short term as you address underlying data and process challenges. In the long run, you and your company will be better served.

Please share your experiences in the comments below.

Use metadata to drive Microsoft Project reporting logic

The need to extract Microsoft Project task-level data in an efficient manner is growing, as many Project Server and Project Online clients are creating Power BI models over this data. Unfortunately, many did not account for this BI need when creating their project template structures. This leads to Project template designs that make it difficult or impossible to extract usable data from the Project Server/Online data store.

Microsoft Project Task names should not drive meaning outside of the project team

One common issue is making the task names in your project template meaningful to needs outside of the project team. You might have standard task names for Finance or for the PMO for example.

If you have told your PMs that they cannot rename or add tasks to their plans, you have this issue. You have encoded information into the structure of the project plan. The issue is that this encoding makes it very difficult to extract the data using tools like SSRS and Power BI.

We’ve seen this before, when Content Management Systems were new

This was a common problem early on in file systems and SharePoint implementations in the 90s and 00s. A few of you may remember when we had to adhere to these arcane file naming conventions so that we could find the “right” file.

For example, you had to name your meeting notes document using a naming convention like the following: Client X – Meeting Notes – 20010405 – Online.doc. If you accidentally added a space or misspelled something, everything broke.

Metadata, a better approach

With the advent of search, we were able to separate the data from the metadata. This encoding of metadata into the file name data structure went by the wayside. Instead, we now use metadata to describe the file by tagging it with consistent keywords. Search uses the tags to locate the appropriate content. We also do this today for nearly all Internet related content in hopes that Google and Bing will find it.

If we reimagine Project business intelligence as a specialized form of search, we see that the metadata approach ensures the right information can be found without encoding data into the project plan structure. There are many benefits to using this approach.

Example: Phase 1 tasks encoding before

For example, today I might have the following situation, where the phase information is encoded into the structure.

[Image: a project plan with Phase 1 encoded into the task structure]

Example: Phase 1 tasks encoding after

The metadata approach would yield the following structure instead.

[Image: the same tasks tagged with phase metadata instead of structural encoding]

Metadata benefits

The biggest benefit is agility. If your business needs change, you can change your data tagging strategy quickly without restructuring all of the projects. You can roll out a new tagging strategy, and the PMs can re-tag their plans in less than a day.

Another benefit is consistency. Using Phase and TaskID, I can extract the Phase 1 tasks consistently across multiple projects. This also has the side effect of reducing the PMO's auditing efforts.

You can better serve the collaboration needs of the project team while still meeting the demands of external parties. Project plans are simply the notes on the latest state of the conversation between members of the project team, intended to serve their communication and collaboration needs. The PM is now free to structure the plan to serve the needs of the project team; they simply have to tag the tasks accordingly, which is a minimal effort. These tags can be used to denote external data elements such as billable milestones, phase end dates, etc.

Lastly, the plan structure makes better sense to the team and is easier for them to maintain. Top-level tasks become the things the team is delivering instead of abstract process steps. The task roll-up provides the health of, and progress toward, a specific deliverable.

How do I implement project metadata in Microsoft Project?

It requires three steps in Project Server/Online.

  1. Create a metadata value lookup table
  2. Create a task custom field (you may need more than one eventually, but start simple)
  3. Add this metadata field to your Gantt views for the PM to see and use

Note: Don’t use multi-value selection for this need as this creates complexities in the BI solution.

Below is an example of a lookup table created to support this metadata use. One use of it was to support a visualization of all implementation milestones for the next month across the portfolio. The query looked for all milestones with a Reporting Purpose equal to “Milestone.Implementation” to extract the appropriate milestones.
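
As a minimal sketch of that query using pandas over exported task data (the column names are illustrative placeholders, not the exact OData field names your Project Online instance generates):

    import pandas as pd

    # Hypothetical task-level export across the portfolio
    tasks = pd.DataFrame({
        "ProjectName": ["CRM Rollout", "CRM Rollout", "Portal Upgrade"],
        "TaskName": ["Go live", "Design complete", "Go live"],
        "ReportingPurpose": ["Milestone.Implementation", "Milestone.Phase",
                             "Milestone.Implementation"],
        "FinishDate": pd.to_datetime(["2024-05-10", "2024-04-22", "2024-05-03"]),
    })

    # All implementation milestones finishing in the next month
    in_window = (tasks["FinishDate"] >= "2024-05-01") & (tasks["FinishDate"] < "2024-06-01")
    is_impl = tasks["ReportingPurpose"] == "Milestone.Implementation"
    print(tasks[in_window & is_impl][["ProjectName", "TaskName", "FinishDate"]])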

To create a task custom field and lookup table, please refer to this link for the details. Note: you can use the same approach in Microsoft Project desktop using outline codes.

Metadata Lookup Table

The Reporting Purposes lookup table supports two levels of values. This enables multiple classes of tags, such as milestones and phases. This exercise focuses on the Milestone.Implementation value.

[Image: the Reporting Purposes lookup table with two levels of values]

Metadata Custom Field

Create the Reporting Purpose task custom field and attach it to the Reporting Purposes lookup table. Make sure that "Only allow codes with no subordinate values" is selected. This prevents the user from selecting the top-level Milestone value without selecting a more specific purpose.

[Image: the Reporting Purpose custom field configuration]

I hope you find this article useful. Please post questions and comments below.