Generative AI and Its Applications in the World of Financial Services
Introduction
With OpenAI’s ChatGPT gaining 100 million users in its first 2 months after launch, Generative AI has become the latest buzzword dominating corporate circles.
Google soon followed ChatGPT with Google Bard, and Microsoft has already announced the “future of search” by integrating GPT into Bing and Edge.
In the simplest terms, Generative AI is an artificial intelligence system that can be used to generate content (text, images, videos, audio, code, synthetic data, …). It is by no means new or sudden – it has been used for a while to enhance image and audio quality.
In essence, AI is a set of algorithms that can analyze a large amount of data and interpret and respond to an incoming query. GPT stands for Generative Pre-trained Transformer, like OpenAI’s GPT-3 or Google’s LaMDA. ChatGPT, as an example, has been trained on 45 terabytes of data, equivalent to millions of volumes of books. Both ChatGPT and Google Bard can interpret questions asked in natural language and respond with relevant information. Other generative approaches, such as Generative Adversarial Networks (GANs), are used to enhance existing data or generate new images and data.
The technology has opened a lot of discussions on its use cases. While it is still early days, I am sharing my perspective on what it could mean and how these sectors could start thinking about and prioritizing their investments in this space.
Potential applications to the financial services industries
I see primarily 3 main (potential) use cases in the banking, insurance and payments sectors.
- Fraud prevention: Generative AI can be used to generate large volumes of test data covering combinations that are otherwise difficult to produce. This enables a much more robust testing strategy for products and allows tightening of rules to prevent fraud and flag anomalies. In addition, the generated test data can also be used to train an ML (machine learning) based fraud prevention model.
- Document processing: Financial institutions deal with a large number of documents, many for regulatory reasons. This technology can help automate document processing and reduce the number of errors. It also frees up employee time, which can be redeployed towards higher-value work. Finally, automated document processing means shorter lead times for communication, and hence an improved customer experience. This can in turn lead to a better NPS and increased revenue.
- Onboarding/ loans: Generative AI also enhances the ability to process images and so can make the KYC process much smoother compared to today. A quick but robust KYC implies lower risk for the bank while offering an onboarding / loan decision within minutes, hence improving conversion rates (by reducing the chances of customers dropping out of the process) while reducing the human effort required in the process.
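To make the fraud-prevention idea above concrete, here is a minimal sketch of generating synthetic transactions and running a toy anomaly rule over them. Everything here is invented for illustration: the field names, the country list, and the rule thresholds are assumptions, not taken from any real bank's system.

```python
import random

def generate_transactions(n, seed=42):
    """Generate synthetic card transactions, including rare combinations
    (e.g., high amounts, foreign countries, odd hours) that are hard to
    harvest from production data."""
    rng = random.Random(seed)  # seeded, so test data is reproducible
    countries = ["NL", "DE", "US", "NG", "BR"]
    txns = []
    for i in range(n):
        txns.append({
            "id": i,
            "amount": round(rng.uniform(1, 5000), 2),
            "country": rng.choice(countries),
            "hour": rng.randrange(24),
        })
    return txns

def flag_anomaly(txn, home_country="NL"):
    """A toy fraud rule: large foreign transactions at night are flagged.
    Real rules would be tuned against exactly this kind of generated data."""
    return (txn["amount"] > 2000
            and txn["country"] != home_country
            and (txn["hour"] < 6 or txn["hour"] > 22))

txns = generate_transactions(1000)
flagged = [t for t in txns if flag_anomaly(t)]
print(f"{len(flagged)} of {len(txns)} transactions flagged")
```

The same generated set could then double as training input for an ML-based fraud model, as noted above.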
In addition to these 3 use cases, there are also 2 supplemental use cases, using Generative AI as an enhanced chat bot.
- Customer support: Chat bots are not new and have been discussed or used for over 5 years; IVRs are even older. In both systems, the user is guided through a decision tree to reach the right answer or get human assistance. With Generative AI, the quality of answers can be significantly enhanced, and the number of steps required to reach an answer can be reduced. This lowers the cost of customer support while improving the customer experience. A much-discussed example is offering financial planning advice to customers by analyzing their information, but the regulatory and privacy aspects of this use case remain unclear at the time of writing.
- Knowledge management/ Staff training: Generative AI can also assist internal staff in finding information, addressing customer queries more efficiently, solving problems by accessing known solutions faster and in their onboarding or training. This can significantly improve the skill level of the employees to ensure consistent and high-quality customer service. This can also enable employees to upsell or cross-sell better leading to higher revenues. Finally, it should lead to higher retention rates in employees hence reducing the cost of HR processes (including hiring and exits).
Finally, there is also a use case as an enhanced research tool.
- Market trends/ investment support: Generative AI can potentially also be used to interpret the impact of events around the world on potential investment decisions and hence help shape the risk profiles accordingly. The main value here will be in joining the dots in existing search results and providing relevant insights.
Risks and open questions
As with every young technology, there are several open questions that need to be addressed before solutions can be industrialized. These range from regulatory (how do we ensure fair use of data, and how do laws like GDPR apply?) to confidentiality (how will the data be stored and used, and will AI providers or competitors be able to see it?), ethics and privacy. Will Generative AI cite its sources, or will it encourage plagiarism?
Additionally, it requires a huge amount of computing power and data to run a system like this effectively. How feasible will it be for “normal” companies to run one, or will they be at a significant disadvantage vis-à-vis larger companies that can? Do companies use a common hosted service, and what happens if that service runs into issues or goes down?
There is some work to be done before this becomes mainstream, but needless to say, Generative AI promises a plethora of exciting innovations and disruption in the times to come.
-
Decomposing Complexity
Over the years, I have come across various software engineering projects that were deemed complex. When faced with complexity, the question we inevitably confront is whether the solution is worth pursuing (will it yield the expected ROI), or whether the idea should be dropped and alternatives explored.
To start with, let’s differentiate between complex and impossible. Complexity can be solved, given time and investment. If something is impossible, it is because it runs into severe technical limitations that are not easy to overcome in the foreseeable future. Technology evolves, and the impossible of today can become easy (or merely complex) tomorrow. In this article, however, we will only look at problems that are complex - i.e., they can be solved with available means and technologies.
“The definition of genius is taking the complex and making it simple.” - Albert Einstein
As a word of caution, I do not aspire to provide any silver bullet answers here, but rather provide an understanding that enables you to ask the right questions in your context and come up with appropriate solutions.
So what makes something complex? I believe that complexity has 3 main dimensions.
-
It involves a lot of effort or investment: These are problems we know how to solve, but it is just a lot of work, or the solution can be bought but at a very high price. It is also context specific: what may be a lot of work for a start-up may be perfectly acceptable for a larger team.
In any case, this can be tackled with appropriate prioritization and scope management. That is much easier said than done, though, and a lot of teams struggle to define what they really need in a Minimum Viable Product (MVP). In a recent example, I witnessed an MVP being built for 5 years (and it is not ready yet). It is painful but nevertheless vital to define the right scope of work.
In some cases, it may not be easy to prioritize, especially if it involves rewriting or replacing an existing system. In such cases, a side-car approach may work better: build a parallel system alongside the existing one, and start migrating topics one by one.
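The side-car idea can be sketched as a thin routing facade: topics that have been migrated are served by the new system, everything else still hits the legacy one. The topic names and handler classes below are made up purely for illustration.

```python
class LegacySystem:
    def handle(self, topic, payload):
        return f"legacy:{topic}"

class NewSystem:
    def handle(self, topic, payload):
        return f"new:{topic}"

class MigrationFacade:
    """Routes each topic to the new system once it has been migrated,
    falling back to the legacy system for everything else."""
    def __init__(self):
        self.legacy = LegacySystem()
        self.new = NewSystem()
        self.migrated = set()

    def migrate(self, topic):
        # Flip one topic at a time; callers never need to know.
        self.migrated.add(topic)

    def handle(self, topic, payload=None):
        target = self.new if topic in self.migrated else self.legacy
        return target.handle(topic, payload)

facade = MigrationFacade()
facade.migrate("invoicing")
print(facade.handle("invoicing"))   # now served by the new system
print(facade.handle("reporting"))   # still served by legacy
```

Because all callers go through the facade, each topic can be migrated, verified, and rolled back independently, which is exactly what makes the large rewrite tractable.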
Obviously, a large effort can also be handled by adding more people to the team. However, larger teams need more processes than smaller teams and can sometimes be slower, so there is a fine balance to the maximum team size that will be effective in a certain context.
-
There is a significant quality risk: This is a wider topic. Some things are easy to implement but too risk prone. The best way to handle such cases is to break the problem down into smaller components that reduce the risks as much as possible.
But doesn’t everything have a quality risk? Perhaps. It is important to understand what is acceptable risk in your context and educate your team on it. You do not need to safeguard against every possible risk, especially if the probability of it materializing in your context is small.
To understand this topic better, let’s break it down further into 3 situations that lead to such risks.
a) There are a lot of unknowns: The solution requires components or technologies that the team is not fully familiar with, or some of the requirements can not be completely defined yet.
It is extremely important to map out all the unknowns and actively work towards reducing the ambiguity. It is equally important to avoid the unknowns where possible and find reasonable alternatives using known techniques. In most cases, if a requirement is unclear, it is unlikely to create significant value for the overall solution.
Also, many times people try to solve problems by themselves and do not ask for help. It is important that teams discuss bottlenecks and complexities together, and try to find relevant expertise to help with the solution.
b) There are associated business risks: and the tolerance for errors is extremely small. Most of the time, the risks are overstated. It is important to understand the risks, add metrics and measures to know when they materialize, apply more-than-normal diligence and validation, and build mechanisms to mitigate these risks when they do materialize.
c) It is difficult to validate a solution: perhaps because it involves a lot of permutations of test data that is not easy to generate, or it addresses very special cases that are not easy to create on a test environment.
This type of complexity is the hardest to address, and requires a judgement based on the context. It is not impossible, it just needs the team to use pragmatic workarounds to generate confidence in the solution.
It is perfectly OK if the solution that comes out is not the most elegant one from a software engineering point of view, as long as it is easy to understand and maintain.
-
There are dependencies beyond our control: Many problems can not be solved by individual teams, but rather require a cooperation between many.
The key to success here is a clear mandate from the leadership team.
These can be of a few types.
a) There are skills needed from different teams: This is primarily a staffing challenge as you need to assemble a team of individuals that have the right skills to deliver a solution. Co-located team members with a common guidance around the project’s objectives are most effective in such a setting.
b) There is a dependency on other teams/ providers to implement parts of the solution: This is more complex. Each team needs to prioritize the change according to the needs of the overall project, and requires a great degree of transparent and clear communication. Complexity increases many fold as the quantum of change required of each team increases. There are unfortunately no good answers in such a situation, but teams can work with defining interface contracts between their components, and building solutions against temporary stubs until the dependency is resolved.
c) There are process complexities: Many a time, the complexity in a project comes from process complexity. It is key to identify such complexities upfront and discuss their relevance/ workarounds on a case-by-case basis. Many processes are a hangover from past issues and should be re-evaluated from time to time.
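The interface-contract-plus-stub approach from point b) can be sketched as follows. The payment provider interface, method names, and approval limit are all hypothetical; the point is that once the contract is agreed, our team can build and test against a temporary stub until the other team delivers the real implementation.

```python
from typing import Protocol

class PaymentProvider(Protocol):
    """The agreed interface contract; the dependent team
    implements this for real later."""
    def authorize(self, amount_cents: int) -> bool: ...

class StubPaymentProvider:
    """Temporary stub honouring the contract, so development and
    testing can proceed before the dependency is resolved."""
    def __init__(self, approve_below_cents=100_000):
        self.limit = approve_below_cents

    def authorize(self, amount_cents):
        # Deliberately simple, predictable behaviour for tests.
        return amount_cents < self.limit

def checkout(provider: PaymentProvider, amount_cents: int) -> str:
    """Our side of the integration, written only against the contract."""
    return "confirmed" if provider.authorize(amount_cents) else "declined"

print(checkout(StubPaymentProvider(), 4_999))    # confirmed
print(checkout(StubPaymentProvider(), 250_000))  # declined
```

When the real provider arrives, it replaces the stub without any change to `checkout`, which is precisely the decoupling the contract buys.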
To summarize, when you come across something complex, you can look at the following questions:
- Can I break down the problem into smaller, more manageable problems and still get some value?
- Can I simplify the ask?
- Can the complexity be significantly reduced by a mandate from the leadership team?
I am curious to hear how you resolve complexities in your organization. Please share your thoughts in the comments below.
-
Modern Software Engineering - Part 3 - Designing the organization
Typical IT organizations have evolved into having multiple layers of managers. Some of that is because organizations try to reduce their risk by having more managers review the work being done. Some is because the growth model only supports growth as managers, so everybody grows into a managerial role sooner or later, leading to a pyramid of people that are primarily in supervisory roles. Many organizations have as much as 50% of staff in supervisory/ managerial roles. Simply speaking, only 50% of the staff is involved in the actual production of software. Basic economics implies that typical overheads (or SG&A) in an organization should be about 20-25%. Shouldn’t the same logic apply to IT teams too?
Another aspect here is that complex organization structures lead to a lot of meetings that waste productive time.
At the same time, there is the question of quality being delivered and the trust between different teams. Often, we see a “handover” mindset in most teams - they deliver their part, and then any issues found are to be fixed by the team that comes next in the chain. More often than not, the end-user’s perspective is ignored and forgotten, and teams focus more on covering their backs than on doing the right thing for the user.
Let’s look at all these aspects through various enabling mechanisms.
-
Aligned goals and metrics
A key aspect of ensuring quality in deliverables is having a common definition of quality across the organization. Most teams fail to recognize this, and we see different metrics being used by them. So, while a sales team might be tracking revenue, or a customer service team might use Average Handling Time (AHT), the IT team enabling them might still be measuring the number of code releases, or bugs. Clearly, there is much more that goes into enabling high revenue or low AHT than the software, and there are a lot of IT-specific aspects developers need to care for, but that does not mean that software developers should not have a view on these business metrics.
It is vital that everybody uses one language and common metrics across the organization. My most impactful stories have been from situations when my teams took the end-user view and partnered with the stakeholders to ensure that the end result was beautiful. Magic happens when developers and business teams collaborate on achieving common goals.
One simple example - we had a feature request to enable printing of VAT invoices for customers, and the developer on my team had already implemented it. However, he did not look happy. I walked up to him to find out why, and I saw him with a printout of an invoice and an envelope. He was upset that the printed customer’s address did not sit in the center of the address cut-out on the envelope. He did not have to do that test, but he went out of his way, fetched an envelope, printed and folded the invoice, and checked whether it would work.
On the other hand, I was in a team at a large company whose main business was online sales. Their website had crashed and been down for 2-3 days. We were parachuted in as external experts to rescue and fix it. At 5 pm, the developers picked up their bags and were leaving. We asked the lead developer if he could help debug the issue, and he refused - it was the job of the support team and they needed to manage it. Now it was late, so I get his point of view. However, in such a situation, I would expect an all-hands-on-deck mindset.
The disconnect between software developers and business goals is sometimes shocking.
The most successful setups are those where every software team has a business leader who is committed to enabling success and is not just a stakeholder. These business leaders also have sufficient say in the system, typically a direct line to the company’s leadership. In such cases, every software team is directly responsible for its impact on the business metrics.
There will be IT specific metrics that the developers need to track, but they also need to have a keen view on the business goals.
I recommend having large screen monitors (that show both business and IT metrics) next to where the developers sit, and I recommend that the teams include the business metrics in their performance reports at least once a month.
However, you do not need to over-engineer this. You do not need to track business value or cost per feature. A meta level view is just fine. The goal here is to establish better quality via ownership and awareness, and not to bring in an accounting overhead.
-
Product and platform, not project teams
Many organizations work in an outsourcing model even with their internal IT teams. The business team creates a project, gives it to the IT team, and then the IT team has the responsibility to deliver. As expected, this helps optimize the costs (maybe) but erodes quality and trust.
The issue here is that most organizations use one model both for day-to-day functioning and for mentoring and reporting. This does not have to be the case.
It is important that organizations drop the notion of projects and move towards products. Now “product” has a specific connotation in most organizations - however, we are not talking about the product that you sell to your customer. We are talking about the “software product” that will enable that sale. Although you may sometimes align software product teams with actual products that will be sold to the customer.
The difference between a product and a project is that the latter has an end date. It is important that there are product teams that take an end-to-end view on a product, and not a tactical view on enabling a feature/ few features. This enables an improved view on quality and ownership in the teams. This also enables an easier way to align KPIs/ OKRs with the business teams.
Hence, an easy way to create product teams is to follow the business metrics and their responsible business leaders. So, sales may warrant one developer team, customer service another, and logistics yet another. Each area may warrant multiple teams, depending on the number of metrics and business leaders.
Another interesting tactic is to allow each business area to have a budget for software development and let them allocate it to each product team based on the latter’s performance in their QBR presentations. This drives collaboration between the business sponsors and the product teams.
When you have multiple product teams for a common business area (e.g., sales), you just need all product owners to collaborate with the same business responsible person.
-
Your organization structure does not need to reflect your IT architecture
Many IT teams adopt an n-tier architecture, composed of different layers. Many model their organizations to align with that architecture too - there is a frontend team, a middleware team, a backend team, and so on. This leads to a large number of dependencies (and bottlenecks) across teams, and also a lack of end-to-end ownership.
In my experience, the most effective model is when organization structure does not replicate the IT architecture. In such cases, there are product teams with end-to-end responsibilities, and platform teams that enable the product teams with tools and frameworks.
The platform teams, or as we alternatively call them - IT-for-IT, are deeply technical teams that develop tools and frameworks. Think of these teams as R&D or enabling teams, whose customers are the product teams, and whose primary responsibility is to bring in efficiency and innovation. These are extremely important, and the product owners for these teams need to directly report into the IT leaders.
Although we call these platform teams, they should not be centered around specific technical tools, e.g., a Salesforce team, or a SAP team. Salesforce experts, or SAP experts, should be embedded in the right product team.
In some cases, the work required is too much to be handled within one “full stack” team. In such cases, there are 2 options, viz., a) take thinner slices of work so a lean team with end-to-end responsibility can still work, or b) divide the teams based on 1-2 layers such that they still have a business significance (e.g., one team does everything until API-enablement, and other prepares frontend and integrates the APIs). The second option is less preferred, and as much as possible, end-to-end ownership should be ensured.
-
More pigs than chickens
You need more people who have skin in the game than those who are just supervisors or advisors. My typical assessment works on the following lines:
- Anybody who is neither actively building or maintaining a product nor taking an active part in defining the requirements is overhead. This includes all advisory roles - security, privacy, architecture, coaches, and so on.
- Anybody spending more than 50% of their time in meetings is overhead.
- The total number of overhead roles should be less than 25% of the total organization. So, if the IT team is 100 people, at least 75 of them must be actively building the product.
A simple way to start is to de-layer the organization. A product owner should report directly to the business leader responsible for that area, all developers should work directly with the product owner and the tech lead, and all tech leads should work directly with the IT leader (CIO/ CTO/ VP/ ...). Cut down on all other managerial layers, and clearly define roles and responsibilities for every role.
Ensure that the Product Owner represents the business team’s perspective and is responsible for writing clear requirements and for verifying the implementation, and that the Tech Lead is a senior developer with >80% of their time dedicated to coding and the rest to mentoring the team.
Automate all non-value-adding tasks, and simplify what cannot be automated, e.g., coordinator functions where someone is only responsible for raising a ticket or acting as a SPOC for communication. Another example is replacing manual QA work with automated tests as much as possible.
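As a trivial illustration of turning a manual check into an automated one, consider the envelope example from earlier: instead of printing and folding an invoice by hand, the formatting rule can be captured in a test. The function and the window width here are invented for the sketch.

```python
def format_address(lines, max_width=40):
    """Wrap an address block so each line fits a fixed-width envelope
    window - a check previously done by printing and folding by hand."""
    wrapped = []
    for line in lines:
        while len(line) > max_width:
            cut = line.rfind(" ", 0, max_width)
            if cut == -1:          # no space to break on: hard-cut
                cut = max_width
            wrapped.append(line[:cut].rstrip())
            line = line[cut:].lstrip()
        wrapped.append(line)
    return wrapped

# The automated counterpart of the manual envelope test:
def test_address_fits_window():
    addr = format_address(["Mr. A Very Long Customer Name Indeed",
                           "221B Baker Street"], max_width=20)
    assert all(len(line) <= 20 for line in addr)

test_address_fits_window()
```

Once this runs in the build pipeline, the check happens on every change rather than whenever someone remembers to fetch an envelope.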
As an example, all advisory roles could be staffed on product teams as needed and would be expected to have an acceptable utilization rate.
Typically, such an exercise frees up 15-20% of capacity that can then be reallocated to value-adding roles. The freed-up people are often very talented people in the wrong roles, and normally >95% of them can be reallocated (and will be interested) for further value creation. Some might need a bit of training, and investing in them brings out magic. Congratulations: you just created a significant productivity boost (through saving and reallocating).
At the same time, as a word of caution, do not go overboard with this idea. Advisory teams are often understaffed and underappreciated. In some cases, having SPOCs helps product owners and business leaders maintain their sanity, especially when it comes to managing vendor relationships. You may still need some manual QA. Similarly, all organizations do require managers, so trying to move towards near-zero managerial capacity will be an absolute disaster. While it is important to chart out an ideal picture, it is also important to then apply a prudent lens and ensure that the model will work in your context.
A study at Google (Project Aristotle) indicated that the most effective teams are those where team members feel psychological safety and have structure and clarity. I recommend keeping this as the underlying thought when designing the organization.
-
2-pizza teams
This concept came from Amazon and is almost an industry standard now. The idea is that the team is small enough to have healthy collaboration and can work together as a SWAT team to deliver towards a common goal. My recipe for a typical team is: 1 Product Owner, 1 Designer, 1 Tech Lead, 4-5 Developers, 1 QA, and 1 Advisor. The designer and advisor roles may be fulfilled by different people at different points in a product release, based on need. E.g., there may be a UI designer at 50% and a UX designer at 50%, or 50% of an architect, 20% of security, and 30% of Subject Matter Experts/ coaches. Some of these may be shared across different teams. So, there are 7-8 dedicated team members and 2 that are floating. I count the floating members into the team because they need to be in the stand-ups and accountable for the quality of delivery (i.e., they need to be pigs, not chickens).
In special cases, depending on the complexity and (lack of) maturity of the organization, some teams may also have a Business Analyst/ Junior Product Owner, someone that helps the product owner by taking up some of their responsibilities.
-
Functional vs Reporting structures
One important clarification needs to be made here. Everything above is about how the teams should operate, not where they should report. The IT team members should continue to report to the IT leaders, so that their career growth, learning and mentoring can be shaped by leaders who understand the field.
The product teams should have a dotted-line reporting to the business leaders, and their performance should be evaluated based on feedback from that context.
Another thing to note: this does not mean that the IT leaders report to their business counterparts. Both IT and business leaders need a top-level reporting line into the company leadership. This is necessary to ensure that the organization does not always prioritize tactical goals over technical excellence and innovation.
This model ensures that the business leaders do not need to worry about the mentorship of technical teams, and the teams get guidance and support from leaders that understand the space. At the same time, the technical teams are focused on generating business value for the organization.
-
Chapters, or communities of practice
A final missing piece here is knowledge sharing. It is important that teams share their work for 3 reasons:
- It enables consistency of implementation across the organization. People can challenge each other whenever they spot an inconsistency. This in turn helps with cost optimization by preventing fragmentation and avoiding duplicate costs
- It enables learning within a community of similarly skilled colleagues
- It helps identify training needs for specific skills
Spotify has Guilds and Chapters; many other organizations have communities of practice. It is vital to encourage creation of similar virtual structures and ensure that they are exchanging knowledge on a regular basis. So, the community needs to appoint a leader, and that leader should regularly share their observations with the IT leaders. Note that this is not a dedicated role, but an additional responsibility for an existing team member.
This has an interesting side-effect: it enables a different growth model in IT compared to traditional ones. Developers can remain developers and still grow (in responsibilities and financial sense) without taking up managerial roles.
As always, there is not just one answer for organization structures. Different models work for different setups, and it is important to understand the context you operate in and what works in that context. Similarly, the size of an organization can play an important role in defining the feasibility of some of these measures. What works for a 50-member team may not work for a 5000-member organization. Finally, culture and team maturity play an important part in defining the model. At the same time, the principles remain broadly the same, and as long as one can define an execution model that works in their context, it will enable a significant productivity and quality boost in the output.
So how do we solve for large organizations? Well, for one, there are a number of standard frameworks and methodologies; I hear SAFe is the most famous. I am personally uncomfortable with any “one-size-fits-all” solution, so I would recommend evaluating the options based on your context and devising an execution mechanism that works for your organization.
Finally, at the heart of all these tips is the intent to simplify (reduce complexity). Anything that increases overheads or complexity in the long term must be challenged and re-evaluated for fit in your context.