Using Generative AI in Software Project Management to Bridge Domains and Accelerate Productivity
Key Takeaways
- AI assistants are great productivity tools for experienced software professionals who are working at the edge of their familiarity and expertise.
- They can help synthesize and derive insights from the industry- or organization-specific content required to define an effective software solution.
- With AI assistants, you can bridge gaps between domains of expertise, e.g., between business stakeholders and engineers, or between governance and business stakeholders, because they can quickly translate one perspective and set of domain-specific terminology into another.
- AI assistants are handy for documenting code both externally and internally. The major large-parameter models (ChatGPT, Anthropic Claude, Meta Llama 3, and others) are trained on code and can recognize and describe patterns in it, enabling them to quickly suggest human-readable explanations of what the code is doing.
- AI assistants should be used ethically with practical consideration for privacy, energy consumption, transparency, and the quality of the workplace our tools create for people.
I have 25 years of experience developing software and leading teams and organizations. I transitioned back to product work and coding this year, which coincided with the ready availability of generative AI assistants built on Large Language Models (LLMs) such as Claude 3, ChatGPT, Llama 2, and Mistral. Their timing has been invaluable to me.
Gen AI assistants play to the strengths of professionals with a breadth of experience, particularly software developers who can describe what they want the LLM to complete and critically evaluate the result. These tools let us swiftly cross divides of domain language and scale large, repetitive tasks down to interesting ones on a human scale. When used carefully, they even facilitate work that is fundamentally about human interaction.
In this article, I’ll focus on three such activities and how AI assistance helps me make better use of people’s time and work with them to achieve results faster:
- Learning and discovery
- Articulating and understanding requirements
- Maintaining alignment with stakeholders
Learning and Discovery
Building effective software entails coming up to speed quickly with written content: conversation threads, manuals, specifications, support issues, and the code itself, and then synthesizing and communicating insights from that context.
I take full advantage of AI assistants built into document collaboration tools and through chat interfaces to cut through domain-specific language: technical, industry, or regulatory.
In this example, I used a threaded LLM chat with a larger model, such as GPT-4 or Claude 3 Opus, to help me extract support themes from a compilation of customer trouble tickets. I did this to understand customer pain points and the organization’s challenges in addressing them. I exported a list of 150 ticket summaries and attached them to the chat.
The chat thread grew to some 30 prompts, through which I extracted the types of issues, summarized and grouped issues by type, and combined them with an understanding of the application flow. From this, I built a sequence diagram and identified the places in that sequence where customers encountered issues.
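When the export is too large for a chat window, or I want to be able to re-run the analysis, the same grouping can be scripted. Here is a minimal sketch using the OpenAI Python client; the file name, model, and prompt wording are illustrative rather than the exact ones I used:

```python
# A sketch of the same theme extraction as a script, using the OpenAI Python
# client. The file name, model, and prompt wording are illustrative.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
tickets = Path("ticket_summaries.txt").read_text(encoding="utf-8")

prompt = (
    "Below are customer support ticket summaries, one per line.\n"
    "1. Identify the distinct types of issues reported.\n"
    "2. Group the tickets by issue type and give a count for each group.\n"
    "3. For each group, note where in the application flow the issue seems to occur.\n\n"
    + tickets
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

In practice I did this interactively across the roughly 30 prompts mentioned above, refining the groupings as I went; a script like this captures only the first pass.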
I understand the agent is fallible, has incomplete information, tries to obey my prompts regardless, and can lie. So, I take responsibility for my understanding and validate with subject matter experts. In this case, I will walk through the sequence diagram with the engineers and ask for their feedback on my assumptions and areas of interest.
This exercise took half a day; absorbing and drawing insights from the source documentation on my own would have taken several days. Bringing this work to the team meant they could focus on clarification and correction rather than repeating prior conversations.
Less tangibly, what can feel like a solitary, monotonous exercise becomes a dialog that immediately begins putting text on the page, helping me engage and generate insights.
Articulating and understanding requirements
I joined an active project as a product owner/manager and found that the existing backlog gave the team scope, but neither priorities within that scope nor a clearly described business value for each feature, and therefore not enough information to exercise discretion in how to implement the features.
Instead, engineers were working from product requirements definitions (PRDs) written by subject matter experts; in effect, they were reading instructions on how to build each feature written by non-developers. This is a common enough situation. When engineers are given tasks without business objectives, they find it hard to use their creative problem-solving skills and experience to design the most effective, maintainable, and extensible solution possible within the constraints of time and resources.
What results is the engineering group wanting to step back to understand, which feels like a waste of time to stakeholders. The stakeholders want the engineers to get on with it, which feels disempowering and risky to the engineers.
This can be as much a problem of translation and obfuscation as of missing information. The relevant context may be inferable from, or embedded in, the existing documentation if you can draw it out and translate it from one domain to another. Bridging this divide is a fantastic use of generative AI.
In this case, I used Notion to turn this and other PRDs into a provisional backlog of work. I imported the 10-page PRD PDF into a Notion document and began asking questions.
The LLM extracts descriptions of work from the PRD and restates what the engineering group needs in language that better addresses their concerns. My prompts guide which themes are raised and elaborated upon.
Because the LLM has been trained on the publicly available understanding of what a user story is, it knows how to construct a completion that looks and sounds like a story. The completion can contain fabrications (i.e., it fills in gaps with something that sounds right but is not true). I review the stories. I amend them as I need to. And, most importantly, I review them with the stakeholders and the engineering team, making any corrections they see fit.
So, again, this doesn’t replace the human process. But taking documents created from the stakeholders’ point of view, using an LLM to quickly translate them to an engineering point of view, and then verifying with both parties saves me hours. The stakeholders can see their work has been used. The engineers understand what they are being asked to build.
I could do this without AI assistance; I have the requisite experience. But using an LLM speeds up the process, leaving me free to focus on critical thinking, questions, and the interesting details.
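For teams that want this translation to be repeatable rather than interactive, the same step can be scripted. A minimal sketch, assuming the PRD has been extracted to plain text; the file name, model, and prompt are illustrative:

```python
# A sketch of the PRD-to-user-story translation as a script. Assumes the PRD
# has already been extracted to plain text; file name and model are illustrative.
from pathlib import Path
from openai import OpenAI

client = OpenAI()
prd_text = Path("prd_extract.txt").read_text(encoding="utf-8")

instructions = (
    "From the product requirements below, draft user stories in the form "
    "'As a <role>, I want <capability> so that <business value>'. "
    "Include acceptance criteria for each story, and flag anything you had to "
    "assume because the document does not state it."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": instructions + "\n\n" + prd_text}],
)
print(response.choices[0].message.content)  # draft stories to review with both groups
```

Asking the model to flag its own assumptions does not eliminate fabrications, but it gives the review with stakeholders and engineers a place to start.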
Maintaining alignment with stakeholders
I still take handwritten meeting notes. It helps me focus and improves my recall. I even use a favorite fountain pen because that’s fun. However, I also use auto transcription and summarization, whether built into video conferencing or as third-party services that attend the video conference as participants, disclosing to all parties that the meeting is being recorded. I currently prefer these third-party solutions.
When helpful, I use summarization to create drafts of our weekly status updates from our backlog and notes. AI assistants built into document management platforms like Notion provide summarization as a menu option. General AI chat tools summarize when prompted to do so.
I heavily rewrite the key points and then distribute them to the team and project stakeholders. They form the outline of our in-person check-ins. Using assistance reduces this to a 30-minute effort where I can focus on insights rather than the more rote act of summarization.
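The same drafting step can be scripted when the backlog and notes live outside a platform with built-in summarization. A minimal sketch, with illustrative file names and model; the output is only ever a draft to be rewritten:

```python
# A sketch of drafting the weekly status update from a backlog export and
# meeting notes. File names and model are illustrative; the output is only
# ever a draft that gets heavily rewritten before distribution.
from pathlib import Path
from openai import OpenAI

client = OpenAI()
backlog = Path("backlog_export.md").read_text(encoding="utf-8")
notes = Path("meeting_notes.md").read_text(encoding="utf-8")

prompt = (
    "Draft a weekly status update with sections for progress, risks, and "
    "decisions needed. Use only the backlog and notes below; do not invent details.\n\n"
    "# Backlog\n" + backlog + "\n\n# Meeting notes\n" + notes
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```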
Responsible use of AI technology
At no point does using an AI assistant diminish an individual’s accountability and responsibility for the work products they create.
As professionals, we must consider the privacy or business sensitivity of the content we expose to LLM agents. Be mindful of the terms of use and privacy policies of your services. Opt out of sharing information for training purposes where available and necessary. Where it makes sense, use local models that will not leak information, or cloud services that retain information within a secure perimeter.
Be mindful of this technology’s power consumption, compute cycles, and costs. Do what you can, as an individual and as a member of an organization, to reduce wasted cycles. For example, develop with lower-fidelity or local models. Don’t perform expensive operations like fine-tuning if you can use pre-trained models with attached files or Retrieval-Augmented Generation (RAG) instead.
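As an illustration of both points, a prompt can be run against a local model so that sensitive content never leaves the machine and no cloud cycles are consumed. A minimal sketch, assuming the ollama Python package and a locally pulled Llama 3 model:

```python
# A sketch of running a summarization prompt against a local model via Ollama,
# so that the ticket content never leaves the machine. Assumes the ollama
# Python package is installed and `ollama pull llama3` has been run.
from pathlib import Path
import ollama

# Illustrative file name; the summaries stay on the local machine.
tickets = Path("ticket_summaries.txt").read_text(encoding="utf-8")

response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Summarize these tickets by issue type:\n" + tickets}],
)
print(response["message"]["content"])
```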
Use AI to improve the quality of life and increase the productivity of human workers rather than with an intent to replace them.
For example, I built my first production RAG application to solve a business problem for a team that was not asking for AI. They were a support team with one very experienced lead and two members new to this specific support responsibility. RAG became part of an answer that enabled the team to get answers from existing sources without having to hunt for them or, as was usually the case, ask the lead. This freed the lead to focus on resolving customer issues and helping answer novel questions from the rest of the team. The lead was delighted when I paired with him to hand over access to the tool. The tool also provides a way to edit and save answers into a canonical runbook; we intend it to be a living document capturing expert-curated troubleshooting responses to the most common and most painful customer problems.
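To make the pattern concrete, here is a highly simplified sketch of the retrieve-then-answer flow at the core of such a tool: embed the question, find the most similar chunks of existing support documentation, and ask the model to answer from only those chunks. The real application also handles ingestion, the editable runbook, and access control; the chunks, models, and question below are illustrative:

```python
# Highly simplified sketch of the retrieve-then-answer (RAG) pattern.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts):
    # Embed a list of texts with an OpenAI embedding model (illustrative choice).
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# Pre-split excerpts from existing support documents (illustrative).
chunks = [
    "How to restart a stuck export job: ...",
    "Known causes of login failures and their fixes: ...",
]
chunk_vectors = embed(chunks)

def answer(question: str, top_k: int = 2) -> str:
    q_vec = embed([question])[0]
    # Cosine similarity between the question and every chunk.
    scores = chunk_vectors @ q_vec / (
        np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q_vec)
    )
    context = "\n\n".join(chunks[i] for i in np.argsort(scores)[-top_k:])
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}",
        }],
    )
    return resp.choices[0].message.content

print(answer("Why do exports get stuck?"))
```

Grounding the answer in retrieved excerpts, rather than fine-tuning a model on the support corpus, keeps the tool cheap to run and easy to update as the runbook evolves.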
Disclose the role machine-built content has played in helping you devise or author your code and writing. This enables others to hold you accountable for this content’s fair use, accuracy, and provenance.
Conclusion
Generative AI is a tremendous asset for experienced software professionals when used mindfully. It can help us capture, summarize, and interrogate large amounts of content and quickly translate it from one perspective and domain-specific set of terms to another. Doing this reduces monotony and rework. I rely heavily on these tools to speed up my context learning and increase my output, which helps me perform in support of stakeholders and teams. It also reduces friction between the different participants in a software development lifecycle and increases the enjoyment I receive from my work.
Attribution: In this article, I used LLM agents extensively in examples. I also used an LLM to suggest takeaways from my writing. However, I did not use LLMs to write any article text.