The Augmented Project Manager

This article was originally published on our CIO Magazine blog, the Effective Enterprise. After seeing recent industry presentations on bots, machine learning and artificial intelligence (AI), I see the application of these technologies changing the practice of project management. The question is whether this future is desirable, and whether we will even have a choice.

The project manager role and the growing influence of artificial intelligence

Much of the daily work of a project manager has not changed dramatically over the last 30 years. We may use different management methodologies, but we still spend a great deal of time manually collecting and disseminating information between the various roles on a project. This effort directly results from the need to fill the information gaps caused by systems that can’t capture what is truly happening within the organization. In a recent PMI-sponsored roundtable discussion, missing or incorrect data was highlighted as a significant issue. Today’s systems depend entirely on humans to enter information, which may be nuanced or simply never entered.

The combination of artificial intelligence, in the form of bots, and cloud computing could radically change this situation. PM effectiveness would be dramatically enhanced, and the need for some PM roles would likely diminish. In the future, as data capture becomes richer and more automated, we may see new adviser services arise from improved data quality and completeness. I foresee significant improvements in three key areas.

Planning

One of the black arts of project management is predicting the future, representing that future state as a new project plan. We draw upon our own domain and company experience to determine the steps, resources and time needed to accomplish the goal. Our success rate at predicting the future is not good; our predictions are fraught with error due to the limits of our experience and that of the organization. If you’ve ever managed a project for something completely new to an organization, you are familiar with this situation.

Imagine if your scheduling bot generated a proposed project plan based on the aggregated and anonymized experiences of similar-sized companies doing the same type of project. Today, we use techniques like Monte Carlo simulation to approximate this information. A bot could incorporate real-world data, potentially yielding better results.

Benchmarking of business data has been around for some time. These new cloud capabilities could see benchmarking expanded to include real-time project management data.

Resource allocation

Another common challenge for project managers is resource constraints. Imagine if your resource pool were the entire world and as easy to use as Amazon.
We are seeing the continued growth of the “freelance nation” trend in corporations. Currently, corporations use agencies to locate and recruit talent, but agencies may simply be a stopgap as bots become a more efficient clearinghouse of freelancer information. Staff augmentation agencies could become obsolete.

For example, your resourcing bot determines that you need a social media expert on your project on April 5th for two days of work. It searches data sources like LinkedIn and your public cloud calendar to find a list of suitable and available candidates. Three are on the West Coast of the U.S., one is in Paris and one is in Sydney. The bot then automatically reaches out to these candidates with offers and, if multiple people accept, manages the negotiation. Once complete, the planning bot is informed, a virtual desktop with the requisite software is provisioned, login credentials are generated and the specific task information is sent to the freelancer. When the job is complete and rated as satisfactory, the bot coordinates with your accounts payable system to pay the freelancer. The planning bot automatically updates the plan and pushes the data to the BI dashboards.

Tracking

Project feedback loops on work are awful. The largest challenge is incomplete data, which results from increasingly fragmented work days, the limits of workers’ memory and tools that rely on human input. The data is also incomplete because entering it provides little benefit to the person doing so.

Workers are overwhelmed with tasks arriving via multiple communication channels, with no consolidated view.

Imagine a world where the time sheet is a relic of the past. Today, we have systems such as Microsoft Delve that know what content you’ve touched. We have IP-based communication systems that know what collaborations you’ve conducted. We have machine learning capabilities that can determine what you’ve discussed and the content of the documents you’ve edited. We even have facial recognition and other capabilities that can track and interpret your movements. Given all of this, why is a time sheet necessary?

Professional athletes already use this type of data in competition to improve their performance, relying on the feedback to spot areas for development. Combining similar activity information in the workplace could prove a boon to productivity.
I can see this working as a Fitbit-style feedback loop that helps workers be better at their jobs and get home on time. Doing so provides a direct benefit to the employee and reduces the Big Brother feel of this data.

The personal bot acts as a personal assistant, reminding the worker of tasks mined from meeting notes and marking tasks as complete in real time. All the while, it also tracks time spent, giving the worker a better picture of where their time goes.

Brave new world

There are many challenges with the view I’ve presented above. Many of them are the same ones we faced when we automated and integrated procurement processes. It is also hard to deny that there are compelling opportunities to improve workers’ lives as well. Bots, machine learning and artificial intelligence are reachable capabilities that should be incorporated into the PM toolbox as you plan your organization’s future work management needs.

I look forward to reading your viewpoints and experiences in the comments below.

How to: Group Dates by Week in Power BI

This post shows you how to use the date hierarchy and the grouping function to easily group your data by year, month and week in Power BI. This can be very handy when reporting against time-phased data, such as sales transactions or Project Online schedule and capacity data.

If you need to start your week on a specific day, you can create a new column based on your date column using this DAX statement:

Week Beginning Date = 'TableName'[DateFieldName] - MOD('TableName'[DateFieldName] - 1, 7) + 0  // use 0 to start the week on Sunday, 1 for Monday, and so on
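
If you prefer to compute the column at load time, Power Query’s Date.StartOfWeek function gives an equivalent result. Here is a minimal sketch, assuming a query and date column named TableName and DateFieldName (substitute your own names):

    let
        // Placeholder: point this at the query that contains your date column
        Source = TableName,
        // Add a week-beginning column; Day.Monday starts weeks on Monday,
        // use Day.Sunday (or another Day.* value) to change the start day
        AddWeekBeginning = Table.AddColumn(
            Source,
            "Week Beginning Date",
            each Date.StartOfWeek([DateFieldName], Day.Monday),
            type date
        )
    in
        AddWeekBeginning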

How to query Project Online multiple value fields in Power BI

This post addresses a need that arises when querying Project Online data via OData. It can also serve as a solution template for other data sources.

Synopsis

A number of projects in our portfolio have impacts in different countries. These countries are designated via a project-level multi-value (MV) custom field named Country. When multiple values are selected from the dropdown list, OData returns them as a comma-delimited list. For example, you could see Brazil, Canada, United States returned as the field value.

Challenge

One of the challenges is that this list can contain any number of values and will grow and contract over time. Therefore, the technique should automatically adjust to the underlying maximum number of specified values.

Approach

For each MV field, a new data set representing all values in the MV field will be added to the data model. This new MV data set will be a child of the original Project master data set. The relationship between the Project set and the MV set will be 1:N.

An inline function will also be used so that it can be executed for each record in the data set; this function splits the values into separate columns.

Once split, the values will be unpivoted, creating a record for each ProjectId and MV field value pair.
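
To make the approach concrete, here is a minimal Power Query (M) sketch of the split-and-unpivot steps. It assumes the Project-level feed has already been loaded as a query named ProjectData with a ProjectId key and a comma-delimited Country column; the names are illustrative, and where the model described above wraps the split in an inline function, this sketch inlines the equivalent steps directly.

    let
        // Illustrative source: the Project master data set with the MV field
        Source = ProjectData,
        Selected = Table.SelectColumns(Source, {"ProjectId", "Country"}),
        // Find the largest number of values currently in the MV field so the
        // split automatically adjusts as the list grows or contracts
        MaxValues = List.Max(
            List.Transform(
                Selected[Country],
                each List.Count(Text.Split(if _ = null then "" else _, ","))
            )
        ),
        // Split the delimited list into separate columns (Country.1, Country.2, ...)
        SplitToColumns = Table.SplitColumn(
            Selected,
            "Country",
            Splitter.SplitTextByDelimiter(","),
            MaxValues
        ),
        // Unpivot so each ProjectId / value pair becomes its own row;
        // nulls from shorter lists are dropped automatically
        Unpivoted = Table.UnpivotOtherColumns(
            SplitToColumns, {"ProjectId"}, "Attribute", "Country"
        ),
        // Tidy up: drop the generated attribute column and trim stray spaces
        Cleaned = Table.TransformColumns(
            Table.RemoveColumns(Unpivoted, {"Attribute"}),
            {{"Country", Text.Trim, type text}}
        )
    in
        Cleaned

The resulting query can then be related back to the Project data set on ProjectId as the 1:N child table described above.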

Potential Issues

  • Incorrect data set relationships. Initially, Power BI created the MV data set with a 1:1 relationship instead of 1:N. Once corrected, all errors went away.
  • You will need to watch refresh performance as you are basically querying the same data multiple times within the model.

Procedure

Register here for this free session to see the step-by-step video and to download the PBIX file containing the data model with sample code: http://academy.tumbleroad.com/courses/how-to-parse-multi-value-columns-in-power-bi

One surprising challenge that can derail your Business Intelligence implementation

One challenge facing business intelligence implementations today is how data is shared within the organization. We’ve been working to make work more open and social over the last few years, but there’s one question we’ve neglected to address along the way: how do we share ad hoc business intelligence data within the organization?

In years past, our business intelligence content was developed as individual reports. This disconnected content approach enabled data producers to act as data gatekeepers. The owner of the report would decide whom to share the data with and would email it to them as they saw fit.

In subsequent years, we migrated to a portal-based distribution strategy. Tools like SQL Server Reporting Services enabled us to post formal reports centrally, where they could be accessed. However, ad hoc reporting generally remained outside the portal and continued to be shared via email.

With the advent of new tools like Power BI, ad hoc reporting is easier than ever. Everything on the screen is clickable and it enables interactive data exploration. The intended interaction with these tools is similar to how we use Facebook, Twitter and Pinterest in our personal lives today. We post information and others can choose how and when to consume it.

Imagine the confusion that arises when a BI implementation team gets puzzling questions from groups: How do I email this data model to others? How do I print a dashboard? Simply updating the tool set to the latest generation, while ignoring the underlying organizational behaviors, can undermine the long-term success of the implementation.

An organizational assessment should be done as part of a business intelligence tool implementation to consider how people currently share data and whether those sharing habits are compatible with the new BI tools. If the organization is used to posting and forwarding links, it should have no issues adopting the new tools. If the organization is still email-centric in its information sharing, a concerted effort will be required to shift habits away from sharing content via email and toward posting and link sharing. These findings can be used to adjust the scope of work needed for training and ongoing support.

What has been your experience? Please share below in the comments.

Using Project Online, Power BI Alerts and Microsoft Flow for Custom Alerts

This is so cool! I just received an alert from Microsoft Flow about a data condition in Project Online. The best thing is that this condition isn’t anything Microsoft provides out of the box. With the functionality now available in Power BI and Flow, we can construct our own alerts.

With recent updates, Microsoft Power BI added the ability to set alerts on the card visual. You can read more about the details here. One of the settings allows an email to be sent with a distinct subject. That got me thinking, since I knew from playing with Microsoft Flow that you can connect it to Office 365 Outlook and drive flows from specific emails. Flow also provides a push notification function, so I can push a notification to the Flow app on my iPhone.

It seems like all of the pieces I need are there to create my own custom notification.

First, I have a Project Online instance with projects and people assigned to those projects. I’m using Project to manage capacity, so knowing when the total number of overallocations is growing helps me react accordingly.

Project Web App

Second, using our Marquee Project Dashboards product, I have a count of Overallocated resources already available. I’m using Power BI Pro, so my data model refreshes automatically multiple times a day.

Marquee Dashboard

Third, I’ve created my own dashboard and I pinned the count of Overallocated Resources to it. Once the card is on your dashboard, you can click the ellipsis menu … and access the alert function (bell icon).

When the Alert panel opened, I turned the alert on, changed the alert title and set my threshold. You can also choose whether to be alerted at most once an hour or once a day. The key is to check the box that says to send an email; this gives Flow something to act upon.

Now, go to http://flow.microsoft.com. Once you log in (because you already have an account, right?), go to My Flows and select Create a New Flow. You are going to create a two-step flow: the first step watches your email for a specific sender and subject, and the second step sends the notification.

  • Type Outlook into the Search window to see the Outlook events.
  • Select Office 365 Outlook – When a new email arrives
  • Ensure the connection is to the right account that will receive the alert email
  • Click Show Advanced Options
  • I filled in mine to look like this. Note the From and Subject Filter values.

  • Now click New Step
  • Click Add an Action
  • Type Push into the search window
  • Select Push Notification – Send a push notification
  • I filled out mine to look like the following. You could have the URL set to bring you back to Power BI or Project. I chose for simplicity to bring me to Outlook.

I would have loved to generate an SMS message, but that would have required a Twilio account, which I don’t have at the moment. Since I was just playing around, I took the free route and loaded the Flow app on my iPhone, where the push notification shows up.

Once set up,

  • I updated a Project Online project plan to create a new resource overallocation and published it.
  • Power BI automatically refreshed the dataset, and the number of overallocated resources increased.
  • This value change triggered a Power BI alert and sent an alert email to my Outlook inbox.
  • Flow picked up the alert and fired off the Flow.
  • The push notification was sent to my phone and, bam, there it was on my Apple Watch.

What do you think? What have you created with Flow and Power BI? Tell me in the comments below! I hope you found this useful.

Free Marquee™ for Google Sheets Template demo

Microsoft Power BI isn’t just for getting data from Microsoft products. The PBIX demo file, which you can get once you register below, allows you to query data from your Google Sheet into Power BI and then share the resulting reports and dashboards via PowerBI.com with co-workers, or with the world if you desire. If you have the Power BI mobile app, you now have your Google Sheets data on the go.

Demo File Only

This PBIX is provided as a demo only, with no support or warranty offered. Testing was only sufficient for a demo, not for production use. You may encounter errors in your environment when using this model in its current state. You are welcome to expand the solution; if you do, please add to the comments below so that we can all share in your experience.

Note: the PBIX file only connects to the first tab in your Google Sheet.

Google Sheets API Oddities

This was an interesting project, as Google Sheets doesn’t have the same concept of a table that Excel does. Therefore, there are two conditions you may encounter for which we don’t yet have a good solution.

First, you shouldn’t have a blank column heading. This will cause the model to error out on the last data transformations as Power BI expects column headings to be present.

Second, the Google Sheets API doesn’t appear to return null cells in the last column of your sheet. Since the cells are returned as a list and we fold the list every X rows, this throws off the row count and fold points. As a workaround, we recommend that the last column of data have a value in every cell.
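
For context, a simplified way to read a sheet with the API key is the Sheets v4 values endpoint. This is a rough sketch rather than the template’s exact query (the template folds a flat cell list using the column-count parameter); SpreadsheetID, BrowserAPIKey and the A:Z range are placeholders, and the sheet must be readable with an API key:

    let
        // Placeholders: use the SpreadsheetID and BrowserAPIKey described below
        SpreadsheetID = "your-spreadsheet-id",
        BrowserAPIKey = "your-browser-api-key",
        // Google Sheets v4 "values" endpoint; returns the rows of the first sheet
        Source = Json.Document(
            Web.Contents(
                "https://sheets.googleapis.com/v4/spreadsheets/" & SpreadsheetID & "/values/A:Z",
                [Query = [key = BrowserAPIKey]]
            )
        ),
        Rows = Source[values],
        Header = List.Transform(List.First(Rows), Text.From),
        // Pad short rows: the API drops trailing empty cells, which is the
        // second oddity described above
        Padded = List.Transform(
            List.Skip(Rows, 1),
            each _ & List.Repeat({null}, List.Count(Header) - List.Count(_))
        ),
        AsTable = Table.FromRows(Padded, Header)
    in
        AsTable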

Send me the data model!

Setup

You need three pieces of data in order to use this PBIX file.

  • The number of columns in the Sheet
  • Your Spreadsheet ID
  • Your Browser API Key

Steps to get your SpreadsheetID

  • Navigate to your Sheets page.
  • The spreadsheet ID is the long string in the URL between /d/ and /edit, as in the example below.
    • https://docs.google.com/spreadsheets/d/1gLzc8AxdlUl1MPY4t2ATKjc1UfBNj7iUaHRexwLYSKQ/edit#gid=0

Steps to get your Browser API Key

  • Log into your Google account
  • Click this link to start the wizard.
  • Click Continue button to create a new project.
  • Unfortunately, you have to create a client ID in the steps that follow, even though we won’t use it.
  • Click Go to credentials button.
  • Select Web Browser (Javascript) in the Where will you be calling the API from?
  • Select User Data under What data will you be accessing?
  • Click What credentials do I need? button
  • Click Create client ID button
  • Enter a Product name
  • Click Continue button.
  • Click Done button.
Now to create the credential we need for this to work.
  • Click the Create credentials button.
  • Select API key.
  • Select Browser key.
  • Give it a name and click the Create button.
  • Copy the API key and paste it into the BrowserAPIKey parameter.

Setting Up Your PBIX File for Use

Once you receive your PBIT file, do the following.
  • You must have Power BI Desktop installed prior to performing this procedure.
  • In File Explorer, double-click on the Google Spreadsheet Template – Final.pbit file.
  • Power BI Desktop will open and you will be presented with this dialog.
  • Fill in the values and click the OK button.
  • The model will refresh and it should load your Google data.

Setting Up Scheduled Refresh on PowerBI.com

Once you have saved the model, verified the data and built your reports, you can publish this model to PowerBI.com. Once there, you can set it up to automatically refresh the data so that any reports and dashboards are up to date.

Procedure for Scheduled Refresh

  • In Power BI Desktop, click File, Save to save the model
  • Click Publish
  • If you aren’t signed into Power BI, you’ll be prompted to do so.
  • You may be prompted for the location to publish. My Workspace is the default
  • Once done, go to PowerBI.com.
  • Follow the procedure in the video below.
  • Navigate to the Datasets in the left navigation to start the process.
  • Note: the API key you entered earlier in the model acts as your credential, which is why the data source is set to Anonymous on PowerBI.com.
Send me the data model!

The Truth Shall Make You Miserable

When companies begin making their data more accessible via self-service Power BI, they soon reach a critical break point in those efforts: the project dashboards tell them something that isn’t pleasant or doesn’t match the narrative that has been publicized.

The Reality in Your Project Dashboards

Performance indicators go red. The data shows the stellar progress that was planned isn’t happening. Operational demands for time are much higher in reality than assumed in planning. In short, it shows the harsh reality, as captured in the data.

This is a moment of truth for organizations. Are we going to embrace the transparency or will we attempt to control the narrative?

Data Quality Challenges

The first question is normally: is this data accurate? This is quite reasonable to ask; especially at the beginning, the data stream may not be as clean as it should be.

How you approach this question can decide your success going forward. For some, questioning the data is a prelude to dismissing its use. For others, it’s a starting point for improvement.

The data deniers will provide many reasons why “we can’t use the data.” They will complain that the data is inaccurate or incomplete and, therefore, that they can’t trust it enough to integrate it into their daily work or use it to make decisions.

These data deniers may have other, hidden reasons for their position, such as politics or protecting a power base. Moving to a data-centric culture is a big change for many organizations, as you have to be open about your failures. No company is always above average in every endeavor.

Data deniers also fear how business intelligence might impact their careers. If the corporate culture is one where punishment is meted out when the numbers and updates aren’t desirable, data transparency likely won’t be welcome.

Change the Focus of How Data is Used to Succeed

The key to overcoming the fear of data is to change the intent of its use, moving the focus from punishment to improvement.

Successful companies using data embrace two simple facts. One, the data is never perfect, and it doesn’t have to be to effect positive change. Two, they’ve defined the level of granularity the data needs in order to be used successfully.

How Imprecise Data is Changing the World

We see this approach in our personal lives. For example, the Fitbit device is not 100% accurate or precise, yet millions of people are becoming more active because of the feedback it provides, based on relatively decent data. You may also be carrying a smartphone, which also tracks your steps. Between the two, you have a generally good idea of how many steps you took today.

From a granularity standpoint, we aren’t generally worried about whether we took 4,103 steps or 4,107 steps today; we took roughly 4,100 steps. Hundreds is our minimum granularity. It could easily be thousands, as long as that granularity meets your information needs.

Cost Benefit of a Minimum Level of Granularity

One area where we see this type of data accuracy dispute in the corporate world is cost data. It’s been ingrained in our psyche that we have to balance to the penny, so our default data granularity is set to the cent.

While that may improve accuracy and precision, it doesn’t make a material difference in the impact. For example, if your average project budget is $2M, a 5-cent variance amounts to just 0.0000025% of the budget. I’ve seen organizations that get wrapped up in balancing to the penny and waste an inordinate amount of time each week getting there.

Instead, define a minimum granularity in the data such that a 1% variance is still visible. For a $2M average budget, a 1% variance is $20,000, so you would round at the $10,000 point. Doing so reduces the work spent trying to make the data perfect, and variances of that size are significant enough to warrant attention and more likely to stand out.
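
As a small, hypothetical Power Query illustration of that minimum granularity (assuming a cost query named Costs with a numeric ActualCost column), rounding at load time keeps a $20,000 (1%) variance clearly visible while making penny-level noise disappear:

    let
        // Hypothetical source: any cost table with a numeric ActualCost column
        Source = Costs,
        // Round to the nearest $10,000; a $20,000 (1%) variance still shows up
        // as two rounding units, while penny-level differences vanish
        Rounded = Table.TransformColumns(
            Source,
            {{"ActualCost", each Number.Round(_ / 10000) * 10000, type number}}
        )
    in
        Rounded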

Implementing self-service BI using products like Microsoft Power BI and Marquee™ Project Dashboards will enable your organization to make great improvements, as long as it is willing to accept the assumptions above. The truth may make you miserable in the short term as you address underlying data and process challenges; in the long run, you and your company will be better served.

Please share your experiences in the comments below.