Into the Cloud Conclusion – Securing Applications

Introduction

Applications need to be developed with security in mind from the start. Security involves many concerns, but I will focus on two in this blog post:

  1. We need to know which actors are using our system.
  2. We need to expose features only to users who need them and have the privilege to access them.

The first one deals with authentication – granting a user some sort of trusted credential (like an ID card). The second one deals with authorization – granting that credential a set of entitlements.

Architecture

As shown in the diagram below, users are authenticated through Amazon Cognito. Cognito is configured to authenticate through three methods:

  1. Using local Amazon Cognito user pools (users signing up through our app)
  2. Using Facebook (users signing in through their Facebook login)
  3. Using Google (users accessing the app with their Google account).

Security architecture showing integration with Amazon Cognito using Facebook and Google identity providers

Standards

As I stated in my earlier posts, it is important to design a security architecture that leverages standards so that it can interoperate with other systems. We used the OpenID Connect protocol. This made it easy to integrate using the Angular Auth OIDC Client library on the user interface and the Spring Security OAuth2 library on the backend.

Authentication

How can we trust that the token passed to the backend service comes from the user instead of a malicious client? How do we identify the user from the token? How do we ensure that the right token is delivered to the client for use?  How do we ensure that users have a good user experience when logging in to the application? These are the questions you need to ask yourself when designing an authentication system.

Gaining access to the System and Single Sign On (SSO)

We implemented the authorization code grant flow with Proof Key for Code Exchange (PKCE) on the Angular user interface. The authorization code grant flow redirects login to the authorization provider (Amazon Cognito in this case). The login and sign-up forms are served by Amazon Cognito, which means the user’s sign-in and sign-up experience can be customized by changing the Cognito configuration. A few things can be achieved:

  1. Sign-in through external identity providers like Facebook, Google, and other OpenID Connect providers
  2. Sign-in through SAML providers
  3. Enabling multi-factor authentication according to the security requirements of your application
  4. Configuring session properties like the access token validity period, session time, validity of refresh tokens, and so on
  5. Determining what user info is needed
  6. Styling the sign-up and login pages.

Signing in through an external identity provider lets you access multiple applications without signing in to each one. For example, once I am logged in to my Google account, clicking the Google login signs me in without entering my username and password again. This approach provides a seamless way for users to access applications.

One of the major security vulnerabilities of the authorization code grant flow is that the code returned by the authorization server can be intercepted, and a malicious actor can use that code to obtain an access token on behalf of the user. This is because public clients are normally used with this grant type and no client secrets are involved. A common mitigation is Proof Key for Code Exchange (PKCE). Most OpenID Connect clients (including angular-auth-oidc-client) and authorization providers (in this case Cognito) support it. PKCE prevents interception because:

  1. Before the redirect is made to the authorization provider, the client generates two cryptographically related values: a code verifier and a code challenge derived from it.
  2. Only the client knows this pair, and their relationship can be validated using cryptographic algorithms.
  3. The client adds the code challenge to the authorization request, and the authorization server keeps track of the code challenge by associating it with the request along with the generated authorization code.
  4. The client is required to pass the authorization code and the code verifier in order to obtain the access token. Since only the client knows the verifier, no malicious actor can obtain the token without it.
  5. The authorization server validates the relationship between the challenge and the verifier using the selected cryptographic algorithm.
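The verifier/challenge pair described above can be sketched in Java. This is a minimal illustration, not the exact code angular-auth-oidc-client runs internally; the class and method names are mine:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.util.Base64;

public class PkceUtil {

    // Generate a high-entropy, base64url-encoded code verifier.
    public static String generateVerifier() {
        byte[] bytes = new byte[32];
        new SecureRandom().nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // Derive the "S256" code challenge: BASE64URL(SHA-256(verifier)).
    public static String deriveChallenge(String verifier) {
        try {
            byte[] hash = MessageDigest.getInstance("SHA-256")
                    .digest(verifier.getBytes(StandardCharsets.US_ASCII));
            return Base64.getUrlEncoder().withoutPadding().encodeToString(hash);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    public static void main(String[] args) {
        String verifier = generateVerifier();
        // The challenge goes into the authorization request; the verifier is
        // only sent later, with the token request.
        System.out.println("verifier:  " + verifier);
        System.out.println("challenge: " + deriveChallenge(verifier));
    }
}
```

The OIDC client library handles this for you; the sketch just shows why an intercepted code alone is useless without the verifier.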

Benefits of using OpenID

In this application, the front end and backend integrate with Amazon Cognito strictly through the OpenID Connect protocol. Because of this:

  1. I can reuse established OpenID Connect libraries without re-inventing the wheel.
  2. I have a solid foundation for the security flow since the protocol is well documented.
  3. I can switch authentication providers and use others like Keycloak or Okta without implementation code changes.

Proof of Identity

Once the client makes a token request with a valid authorization code and code verifier, Amazon Cognito issues three tokens:

  1. An access token
  2. An ID token
  3. A refresh token.

These tokens are cryptographically signed by Cognito using public-key cryptography, per the OpenID Connect standard and the JWT specification. In this specification:

  1. The authorization server maintains a key pair: a private key (kept secret by the authorization server) that it uses to sign generated tokens, and a public key that it exposes for resource servers to retrieve.
  2. The client attaches the access token to a request.
  3. The resource server validates the request by validating the signature of the token using the public key.

In short, proof of identity is established based on trust. If the resource server can confirm that the token comes from the authorization server, then it trusts the information the token carries. The token is a JWT containing claims that identify the user, one of which is the username claim containing the user’s ID.
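As a toy illustration of reading a claim, the payload segment of a JWT is just base64url-encoded JSON. The class name and token below are made up; decoding like this only reads claims and never replaces signature validation (which Spring Security performs for you):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtClaims {

    // Decode the second (payload) segment of a JWT into its JSON text.
    // NOTE: the claims must not be trusted until the signature has been
    // verified against the authorization server's public key.
    public static String decodePayload(String jwt) {
        String[] parts = jwt.split("\\.");
        if (parts.length < 2) {
            throw new IllegalArgumentException("Not a JWT");
        }
        byte[] json = Base64.getUrlDecoder().decode(parts[1]);
        return new String(json, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // A toy token with a "username" claim, for illustration only.
        String payload = Base64.getUrlEncoder().withoutPadding()
                .encodeToString("{\"username\":\"alice\"}".getBytes(StandardCharsets.UTF_8));
        String token = "eyJhbGciOiJSUzI1NiJ9." + payload + ".signature";
        System.out.println(decodePayload(token)); // {"username":"alice"}
    }
}
```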

Authorization

Once the identity of the actor has been established, the system needs to know the capabilities of the actor. What is this user allowed to do on this system? This is implemented in different ways in different systems. There are two aspects of authorization:

  1. Obtaining the user entitlements
  2. Enforcing the user entitlements

Both aspects can be handled by the same authorization system or by different ones. One practice I find useful is to decouple authentication from authorization, because you will most likely not use the same solution for both, and you do not want to be tied to a solution because you cannot isolate them in your logic.

In my case, I am only using Cognito for authentication. Cognito has a native integration with Amazon Verified Permissions, which handles the above two aspects through authorization policy configuration and enforcement. Because I isolated both in my design, I am free to start with a much simpler authorization system using database-backed role-based access control. In the future, if I want something more elaborate like Amazon Verified Permissions, I can easily integrate it.

As I said at the beginning of the Authorization section, authorization behaviours fall into two categories:

  1. Those that handle 1 and 2 together: all you have to do is ask the authorization server, “does the user have access to this resource?”
  2. Those that handle 1 and expect you to handle 2: they provide the list of the user’s entitlements, and it is up to you to enforce them.

I am currently using the second approach, retrieving the user’s entitlements from the database.
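A minimal sketch of this database-backed approach is below. Here the “database” is an in-memory map, and the role and permission names are made-up examples; in the real application the lookup would hit the user’s stored roles:

```java
import java.util.Map;
import java.util.Set;

public class EntitlementService {

    // Hypothetical role-to-permission mapping; in practice this is loaded
    // from the database for the authenticated user.
    private static final Map<String, Set<String>> ROLE_PERMISSIONS = Map.of(
            "ADMIN", Set.of("hymns.view", "hymns.create"),
            "READER", Set.of("hymns.view"));

    // Aspect 1: obtain the user's entitlements.
    public static Set<String> entitlementsFor(String role) {
        return ROLE_PERMISSIONS.getOrDefault(role, Set.of());
    }

    // Aspect 2: enforcement is our responsibility with this approach.
    public static boolean hasPermission(String role, String permission) {
        return entitlementsFor(role).contains(permission);
    }
}
```

The point of the split is that either half can later be swapped for an external authorizer without touching the other.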

Backend Authorization

Since the backend is a Spring Boot service, authentication and authorization are handled using Spring Security. I implemented a custom granted authority resolver: an implementation of Spring’s Converter interface that converts a JWT to a collection of granted authorities.

public class AppJwtGrantedAuthorityResolver implements Converter<Jwt, Collection<GrantedAuthority>> {

    @Override
    public Collection<GrantedAuthority> convert(Jwt source) {
        …
    }
}

Spring Security already has APIs that help you enforce access based on the entitlements a user has. So I can configure things like:

  1. Users can access the GET ‘/hymns’ endpoint only if they have the ‘hymns.view’ permission.
  2. Users can access the POST ‘/hymns’ endpoint only if they have the ‘hymns.create’ permission.
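Rules like these can be expressed with Spring Security’s request-matcher DSL. A sketch, assuming Spring Security 6’s lambda style (the bean and class names are mine, and the OAuth2 resource server wiring to the converter above is configured separately):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpMethod;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class SecurityConfig {

    @Bean
    SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http.authorizeHttpRequests(auth -> auth
                // Enforce the two endpoint rules described above.
                .requestMatchers(HttpMethod.GET, "/hymns").hasAuthority("hymns.view")
                .requestMatchers(HttpMethod.POST, "/hymns").hasAuthority("hymns.create")
                .anyRequest().authenticated());
        return http.build();
    }
}
```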

Front-end Authorization

Front-end authorization follows the same approach as the backend, but instead of restricting APIs, we restrict the following:

  1. Access to pages on the app
  2. Visibility of components
  3. Functionality (enabling or disabling) of certain features or components.

These can be achieved in angular by:

  1. Implementing route guards
  2. Using entitlement-based directives
  3. Using conditional statements applied to templates.

NOTE: It is advisable to implement both server-side and client-side authorization.

Back to Amazon Verified Permissions

Amazon Verified Permissions is a user authorization service newly introduced by Amazon that enables you to define flexible authorization rules specifying when, and how much, access a user has. These rules are written in Cedar, an open-source language for specifying access control.

As I said earlier in this post, as a resource server you ask Amazon Verified Permissions whether a user has access to the requested resource. The service takes that authorization request, passes it through the set of rules you defined in Cedar, and responds with an ALLOW or DENY decision.

This approach is extremely flexible and allows you to specify access rules based on any user and request criteria. Application teams can configure their access requirements in a manner that helps them efficiently manage access controls.

One drawback is that a call has to be made to the authorization API for every request, which can be costly and inefficient. This can be mitigated by:

  1. Caching the authorization decisions
  2. Batching authorization requests for multiple functionalities using the batch authorization API.
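The caching mitigation can be sketched as a small TTL cache keyed by user and action. The key format, TTL, and class name are illustrative choices of mine, not part of any AWS API; the supplier stands in for the expensive call to the external authorizer:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BooleanSupplier;

public class DecisionCache {

    private record Entry(boolean allowed, long expiresAt) {}

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public DecisionCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // Return a cached decision if it is still fresh; otherwise ask the
    // (expensive) authorizer, e.g. a call to Amazon Verified Permissions,
    // and cache the answer for ttlMillis.
    public boolean isAllowed(String user, String action, BooleanSupplier authorizer) {
        String key = user + "|" + action;
        long now = System.currentTimeMillis();
        Entry e = cache.get(key);
        if (e == null || e.expiresAt() < now) {
            e = new Entry(authorizer.getAsBoolean(), now + ttlMillis);
            cache.put(key, e);
        }
        return e.allowed();
    }
}
```

The TTL bounds how stale a decision can be, so it should be chosen to match how quickly policy changes must take effect.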

Secret Management

I cannot conclude without talking briefly about how to manage secrets. Database credentials and private keys need to be stored securely to prevent malicious users from gaining access to your system. AWS Systems Manager provides a Parameter Store that lets you store different parameters, including secrets. Secrets are stored as SecureStrings, encrypted using either the default AWS managed encryption key or a customer managed key. Encryption keys can be created using the AWS Key Management Service (KMS).

Using the default AWS managed key is free, but you can only access the secret from within the account, using AWS APIs. If you need more flexibility and want to share secrets with multiple AWS accounts, you will need to create a customer managed key, and of course, this is not free.

The EC2 Journey Unfolds – Routing Traffic

Introduction

In my earlier article I discussed my experiences with the Amazon EC2 instance. I also talked about the VPCs and security groups needed to set up the network in which the EC2 virtual machine resides. Today, I will detail my approaches to getting network traffic to the virtual machine.

Routing Architecture


In my simple scenario, I am using the same EC2 instance to host my Nginx server (which serves my Angular resources for the front end) and the Spring Boot service that runs the Hymns API. The Nginx server listens on port X and the Spring Boot service listens on port Y. Both services use the HTTP protocol.

Target Groups

Target groups are what AWS uses to determine where to route traffic. After AWS determines the IP address, it needs to know which port and protocol to use to send the data packets. This is what a target group provides.

The Big Picture

I plan to build many applications in the future, as well as APIs that other developers can use to collaborate with me. I will be using mariarch.com to serve web applications and pages, and mariach.io to serve collaboration APIs. Although the routing solution I start with will be simple, I have to bear this in mind so that my system can evolve appropriately.

The routing big picture. This shows how user interface and backend traffic will be routed.

Flow for API Traffic

  1. Client makes an API call
  2. The request reaches an API gateway
  3. The API gateway routes the request, based on the subdomain, to the required target group
  4. The request reaches the target group.

Flow for User Interface Traffic

  1. Client loads a page on the browser
  2. Requests from the browser are received by a load balancer
  3. The load balancer routes the request based on domain to the required target group
  4. Target groups define the port and protocol that an Nginx server listens to
  5. An Nginx server can be configured to host multiple applications and will contain rules to serve the correct asset based on the subdomain used in the request
  6. The Nginx server returns the requested asset.

API Gateways with Amazon API Gateway

This is by far the most feature-packed approach to take, and also not the simplest to configure. AWS provides two categories of HTTP traffic routing: the HTTP API and the REST API. I was initially confused about the difference between these two offerings, and on studying their documentation I got to understand their peculiarities. I will briefly summarize them below.

HTTP APIs only provide basic routing, and you can create APIs per region. You can configure these APIs to use mutual TLS authentication, and authorize using IAM, Amazon Cognito, JWTs, or custom authorizers implemented with AWS Lambda functions. You can also use custom domains with these APIs.

REST APIs build on all the features of HTTP APIs, and they can be edge-optimized and private. You can set up these APIs to use certificate authentication or protect them with AWS WAF (Web Application Firewall). You can also configure resource policies to achieve fine-grained authorization. In addition, you can create API keys that your API clients authenticate with, and enforce rate limiting and throttling per client. Please see the developer guide for more details about these two offerings.

Typical AWS API Gateway Configuration

Typical AWS API Gateway configuration showing how the API gateway routes traffic to a target group through a network load balancer.

The typical way to configure AWS API Gateway is through a network load balancer. AWS provides a detailed guide on how to do this, and I followed it with no issues.

Simple API Gateway Using Application Load Balancers

Using the AWS API Gateway will surely give you the most flexibility and enable you to achieve much more without writing code. But it requires an extra component, which adds to your overall costs. If you don’t need the features the API gateway provides (like me at this point, wanting only simple routing by subdomain), you can simply use an application load balancer, which lets you specify basic routing rules. In my case, I am using the application load balancer to route traffic by domain name.

Routing traffic using application load balancers.

Network versus Application Load Balancers

AWS provides different categories of load balancers, of which application and network load balancers are the recommended and most popular. Network load balancers route traffic at the network level (i.e., using protocols like TCP and UDP), which means the rules you specify there are network related. Application load balancers, on the other hand, route traffic at the application level (using protocols like HTTP and HTTPS). This enables you to route based on HTTP-related criteria like path, headers, etc.

Transport Layer Security (TLS) and Certificates

Users will not trust you when you do not use HTTPS (I mean, I get skeptical when I navigate to a site without HTTPS). Even worse are HTTPS sites flagged with the dreaded “certificate invalid” or “site not trusted” messages. These messages relate to the certificate used to establish the secure connection.

AWS API Gateways and load balancers allow you to listen on secure endpoints by associating a certificate with them. AWS gives you an easy way to create and manage public certificates. These certificates are signed by the AWS Certificate Authority and will show up as legitimately signed certificates (check hymns.mariarch.com, which was signed by a public AWS certificate). Certificates provided by AWS are managed using AWS Certificate Manager. This service allows you to create public certificates for free and will not charge you for using them. You can associate multiple domain names with a certificate or use wildcards. Please note that when creating certificates with AWS Certificate Manager, you will be required to validate the sub-domains associated with those certificates. Again, AWS provides a detailed, step-by-step process for this, which involves adding a few records to your domain. If your domain is managed by AWS using Route 53, it’s much easier.

A few things to note about AWS public certificates:

  1. They are signed by the AWS Certificate Authority.
  2. You can only view the certificate details.
  3. You cannot access the private key of the certificate.

This effectively means that you cannot use these certificates to secure any component apart from those natively integrated with AWS certificate management (i.e., API gateways and load balancers). I learned this the hard way when trying to route traffic directly to my EC2 instance: you don’t have access to the private keys of these certificates to configure a deployed service like Nginx for HTTPS.

AWS private certificates give you much more control and the freedom to use any certificate authority of your choice. They also give you access to the private keys. And, as you guessed, they are not free. Please see the AWS documentation on certificate management for more information.

Routing Traffic Directly to EC2 Instances

As you might expect, this is the simplest approach because it does not require any extra AWS component. We just need to:

  1. Add network rules to the security group to permit traffic from the IP addresses you want (you can permit all traffic, but this is discouraged)
  2. Run a service that listens on a particular port. The default port for HTTP is 80 and for HTTPS is 443 (if you want your URLs to be without a port, use these defaults).
  3. To use TLS, use an AWS private certificate or obtain a certificate from an external provider.

This option, on its own, is discouraged because the network interfaces of the EC2 instances are exposed directly. However, it can be a viable option if you already have routing providers outside of AWS. In that case, your network rules should only allow traffic coming from those routing providers.

So interesting right? Now that I have looked at routing traffic, I will circle back to securing APIs using Amazon Cognito in the next post.

Into The AWS Cloud

Innovation is a constant phenomenon in our world today. To keep up with the speed of innovation, businesses must be able to adapt quickly. One of the factors restricting businesses from evolving is IT infrastructure.

IT infrastructure comprises everything a business needs to survive in this digital age: email providers, servers, media storage, processing units, software, routers, and so on. Businesses that manage their own IT infrastructure have to dedicate a percentage of their workforce to managing and evolving this infrastructure as the business grows. This task is especially cumbersome for lean businesses without the capacity to dedicate staff for this purpose.

With the advent of “the cloud”, more and more businesses choose to “outsource” all or part of their IT infrastructure so that it is managed by a separate company. This company ensures that computing and network resources are available and shared efficiently. These “cloud” providers now offer precise billing strategies that ensure you only pay for what you use (as opposed to investing in a 16 GB RAM server and using it only for email). These days, it is often more cost effective, especially for lean businesses, to move to a cloud architecture and focus on their core competencies.

In the following sections, I will delve into the AWS cloud, exploring and reviewing the services it provides and the practical use cases for small businesses.

Do you want me to review or check out any aspect of AWS for you? Feel free to comment and let me know or contact me.

The Holy Architect

Introduction

The title sounds like a far-fetched ideal (just like the caption of this blog post, generated by Microsoft’s Copilot, powered by DALL·E 3). For some, it is inconceivable to associate two words from seemingly different domains of life. Most times, people associate holiness with saints and people devoted to their religion. In our world today, and for some people, it is inconceivable, or at least impractical, to reach the zenith of our career and still be devoted to our religion. For others, because we are at the zenith of our career and live in comfort, we see no need to embark on such a journey.

Misconceptions about Holiness

What thought crosses your mind when you hear that someone is holy? The person spends an inordinate amount of time in a religious institution? The person prays a lot? The person always talks about spiritual things and reads a lot of spiritual books? Or perhaps the person is frequently mentioned in religious circles. Some people go as far as to think that because these people spend so much time devoted to their religion, they cannot possibly have a thriving career.

It is easy to think and postulate about matters we have little or no experience with and it is often easy to judge people by external appearances. But by careful consideration of the concept of holiness, one would realize that the topic is much more profound and vital to everyday living.

What is Holiness?

The Oxford dictionary gives us something that is not entirely helpful: “the state of being holy”, with “holy” meaning “dedicated or consecrated to God or a religious purpose; sacred”. In the simplest terms, it is the state of realizing that you are meant for something and giving your whole self to that purpose. One who recognizes that his life has meaning does well to find and fulfil that meaning. And like any other endeavor in life that requires skills, there are skills required for you to faithfully achieve your life’s goal. I’m not talking about career skills this time, because our lives are much greater than our careers. I am talking about fundamental skills: virtues.

The Pursuit of Virtue can be applied to every aspect of our lives

I recently came across a book titled “The Art of Living” by Dr Edward Sri. It made me aware that basic virtues are needed in all aspects of our lives, even in our careers. After that, I attended some project management courses, and to my amazement, these same concepts (albeit discussed in a different lexicon) were presented as fundamental to managing teams, individuals, and one’s self. I will try to explain how vital virtues are with a story.

I have a lot of experience in software development, and people usually encourage me to write about what I know; I know that it will help others if I do so. With the virtue of prudence, I am able to evaluate the benefits of documenting my knowledge. This virtue also makes it possible to listen to my peers and take their advice. Also, looking back at my career, I have benefitted from the mentorship and guidance of so many great individuals, and used numerous resources (I cannot imagine my career without Google and Stack Overflow). The virtue of justice makes me understand that because I have received so much, I ought to give back the same way I received.

Knowing that I need to write to help others is not enough; this knowledge does not actually help anyone. I’ve known this, but because I lacked two other fundamental virtues, I have not been able to successfully and continuously document my experiences in a manner that truly inspires and helps others. With the virtue of fortitude, I would be able to persevere in my writing even when the topic I am writing on seems difficult; be patient and continue my blogs even when my kids distract me, or I am heartbroken, or grieving the death of a loved one; with fortitude, I can freely dedicate my effort to writing great pieces that inspire, inform, and educate my readers, and not settle for mediocre blog posts. With the virtue of temperance, I will be able to overcome distractions that get in the way of completing these blog posts, like food, amusement, sex, and so on (which, on their own, are not bad). With this virtue, I will also avoid excusing myself from writing my blog post because I have another (still legitimate) responsibility.

When I am able to write blog posts easily, consistently, promptly, and with joy, then I have attained a level of perfection that will ensure my efforts truly make a difference in people’s lives. Note that this perfection does not concern the work in its nature but the manner in which the work is done. Attaining this level of perfection, through the study and practice of these virtues, in every aspect of our lives including our careers, is the basis of holiness.

The Virtuous Architect

There can never be a perfect software architect, but that shouldn’t stop us from studying and practicing to become one. If we set our aim at the cosmos and fall short of it by landing in the sky, then we have done something exceptional. However, if we tell ourselves “we cannot attain the cosmos” and settle for a “more realistic” goal like the sky, we might find ourselves falling smack on the ground. And like all life’s endeavors, the cardinal virtues – prudence, justice, temperance and fortitude – are the basic skills every software architect needs to design useful systems, collaborate with clients and stakeholders to own their vision, and work with the team to make this vision a reality.

Prudent Architecture

Prudence generally means applying right reason to things. Prudence helps the architect determine the right course of action in a given context. This context can be derived from careful consideration of the data and metrics surrounding an endeavor – for example, telemetry data for performance enhancement, business capability use cases and statistics for feature elucidation, or bug report data and tests for bug fixing. Experience should also be sought from best practices, whether external global standards or internal company-adopted standards. Standards not only ensure interoperability but encourage architects to rely on and build upon the knowledge and wisdom of their predecessors and counterparts.

Listening to and seeking advice from mentors and other professionals who have had similar experiences is also key to making the right decision. No matter how knowledgeable you think you are, you have a lot more to learn. This realization is a major catalyst for your growth, not only as a software architect but as a person. Common ways to seek advice are participating in software development communities like Stack Overflow, searching for answers in search engines, talking to someone you know has more experience, or even talking to your peers. You might be surprised that just by talking, you find the solution to your problem.

A prudent architect carefully collects data, being mindful of, and guarding against, biases in data collection. It is so easy for us humans to be biased and not even know it. Last year, when I wanted to get a car, I began to notice cars similar to the one I was about to purchase. Weird, right?

Once we have carefully considered the options and data involved, we need to make a decision. As prudent architects, we avoid making decisions based on impulse and feelings and take time to consider all the information. But by delaying decisions or being indecisive, we end up undoing what we have done and causing harm. A prudent architect makes a decision in a timely manner after weighing the context surrounding it. And after making that decision, the architect sticks to it, because he knows it was carefully made considering the available context. This does not mean the architect is obstinate or does not update his position based on new information or data. Because we are all imperfect beings, we should always be open to correction.

There are common reasons that make us indecisive. The most common is anxiety: we worry about the future and whether the solution will stand the test of time. To this I answer that no temporal thing in this world is worth being anxious about. What’s the worst that could happen? Think critically about it. Secondly, as architects, especially in agile development, we don’t expect to deliver the perfect solution in the first iteration. As long as we are honest with ourselves, listen to our stakeholders and our team, and keep track of improvements and feedback, we will surely get the job done.

Responsibilities of an Architect

The virtue of justice makes us aware of our responsibilities and what we owe others. We owe our clients transparency and honesty. We need to listen to and provide solutions for the concerns of stakeholders. We need to be accountable to our managers and ensure we do everything in our power to execute quality deliverables in a timely manner. We owe our teammates guidance and leadership, and we need to be a rock for our team in times of crisis and uncertainty. This virtue motivates us and drives us to be excellent individuals. We often feel disillusioned or discouraged because we have lost sight of the true reason we do things. The virtue of justice pins the true essence of our actions on our foreheads so that we do not lose sight of the true reason for our endeavors.

Navigating through Troubled Waters

Now we have developed the best architecture. We are able to answer all the questions our stakeholders throw at us. We present our architecture to the governance team and they applaud. Awesome! Then what? What happens when it’s time to implement and we meet a few hiccups? Things don’t go according to plan, and stakeholders become worried and start questioning our abilities. What happens when we realize the implementation is taking longer than expected and we receive immense pressure from management? In my circle we call these “fires”.

It’s in these common situations that we need perseverance which is an aspect of fortitude. With perseverance, we can wade the muddy waters of difficulty calmly, think clearly and come up with solutions, under pressure, that will get you and your team to the finish line. With perseverance we can objectively work at the solution to a problem and seek to solving the root cause instead of taking the short cut or the easy way out.

Constancy is another aspect of fortitude, one that enables us to keep working to solve a problem and not abandon it for a lesser or more trivial task. We have all faced that situation where we have a critical bug to fix or some part of the architecture to figure out, and because it is too difficult at the time, we tell ourselves, “oh, let me go and do PR reviews”.

Sometimes the work is not difficult and the challenges we face are internal. We feel disappointed because we were just passed over for a promotion; we feel heartbroken when a loved one leaves us; or grief when someone we love dies. We can also feel anxious about being a husband, wife, or father, and so on. Many factors apart from the nature of the task itself can make the execution of our tasks difficult, and the virtue of patience helps us carry these burdens effectively and achieve great things despite them.

And when I mention great things, I do mean them. This is because magnanimity, the fourth aspect of fortitude, helps us aspire to greatness in everything we do. A magnanimous architect is not content with implementing something that merely “works”, but aims to design a system that is truly beautiful, magnificent, scalable, and efficient given the resources at his disposal.

Avoiding Distractions

Anything that prevents us from achieving our task as software architects is a distraction and should be avoided, whether it is good or evil in itself. The virtue of temperance helps us regulate our passions so we can focus on what is truly important.

One such common passion among software architects is novelty: the desire to try new things and work with new tools, seeking the latest technological advancements and trends in the software development world. Seeking the best tool, within reason, is the prudent approach, as discussed; but seeking tools for the sake of having them in our system leads to unnecessary change and introduces unwanted behaviors into our system.

The Proud and Vain Architect

It is amazing what the vices of pride and vanity can do to a software architect. A proud architect believes he is better than he actually is and above everyone else on his team. A humble architect knows what he truly is in relation to his team and stakeholders, and strives for greatness given that knowledge. A vain architect does everything for his own glory. A humble architect seeks the progress of the project and the team, and the satisfaction of stakeholders. Because a proud architect thinks himself better than others, he refuses to listen, spends a lot of time talking in team meetings, and makes everything about himself. The proud architect does the team great harm because he causes discord and strife and goes to great lengths to impose his will on the team. More importantly, the proud architect does himself great harm by shutting himself off from the team and depriving himself of the opportunity to grow and learn. Finally, because a vain architect seeks his own glory, he is afraid of making mistakes, is always worried about his image, and tries to make everyone happy instead of doing the right thing.

It is important to understand that a humble architect is not indecisive, timid, or shy. In fact, because a humble architect knows what he truly is – a human being with faults, a team member, an advisor to stakeholders – he knows that he can make mistakes and is not afraid to do so. He knows that he can say the wrong thing from time to time, and so he has the confidence to talk because he is willing to accept corrections. He is not indecisive and anxious but full of great confidence, because although he is a lowly human being, he places his trust in The One Who Knows.
