Posts

Java Annotations

  Annotations are metadata bound to elements of a program's source code and have no effect on the operation of the code they annotate. Their typical use cases are:

- Information for the compiler – with annotations, the compiler can detect errors or suppress warnings
- Compile-time and deployment-time processing – software tools can process annotations and generate code, configuration files, etc.
- Runtime processing – annotations can be examined at runtime to customize the behavior of a program

There are several annotations in the  java.lang  and  java.lang.annotation  packages; the more common ones include, but are not limited to:

- @Override – marks that a method is meant to override an element declared in a superclass. If it fails to override the method correctly, the compiler will issue an error
- @Deprecated – indicates that an element is deprecated and should not be used. The compiler will issue a warning if the program uses a method, clas...
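The two compiler-facing annotations above can be seen in a minimal sketch (the `Animal`/`Dog` classes here are hypothetical, just for illustration):

```java
// A minimal sketch of @Override and @Deprecated as described above.
class Animal {
    String sound() { return "..."; }

    @Deprecated
    String noise() { return sound(); }  // old name kept for compatibility; callers get a warning
}

class Dog extends Animal {
    @Override  // the compiler verifies this really overrides Animal.sound()
    String sound() { return "woof"; }

    // @Override String sond() { return "woof"; }  // typo: would be a compile-time error
}

public class AnnotationDemo {
    public static void main(String[] args) {
        Animal a = new Dog();
        System.out.println(a.sound());  // dynamic dispatch reaches Dog.sound()
    }
}
```

If the `@Override` annotation is kept but the method name is misspelled, compilation fails instead of silently introducing a new, never-called method.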

Spring boot - overview

  Describe the flow of HTTPS requests through a Spring Boot application. At a high level, a Spring Boot application follows the MVC pattern, as depicted in the flow diagram below. Spring Boot Flow Architecture

24. What is the difference between @RequestMapping and @GetMapping? @RequestMapping can be used with GET, POST, PUT, and many other request methods via the method attribute on the annotation, whereas @GetMapping is a specialization of @RequestMapping for GET requests that makes the handler's intent clearer.

25. What is the use of Profiles in Spring Boot? While developing an application we deal with multiple environments, such as dev, QA, and prod, and each environment requires a different configuration. For example, we might use an embedded H2 database for dev, but for prod we might have a proprietary Oracle or DB2 database. Even if the DBMS is the same across environments, the URLs will differ. To make this easy and clean, Spring provides Profiles to keep the separate configurat...
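As a sketch of how profiles keep per-environment configuration separate, Spring Boot picks up `application-{profile}.properties` files when a profile is active (the datasource values below are hypothetical examples, not taken from the post):

```properties
# application-dev.properties — used when spring.profiles.active=dev
# embedded H2 database for local development
spring.datasource.url=jdbc:h2:mem:devdb
spring.datasource.username=sa

# application-prod.properties — used when spring.profiles.active=prod
# proprietary database for production; same DBMS or not, only the profile file changes
spring.datasource.url=jdbc:oracle:thin:@prod-db-host:1521/APPDB
spring.datasource.username=app_user
```

The active profile is chosen at startup, e.g. `java -jar app.jar --spring.profiles.active=prod`, so the code itself never branches on the environment.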

Data Pipeline Design Patterns

ETL Extract-Transform-Load (ETL), as shown in figure 2, is the most widely used data pipeline pattern. From the early 1990s it was the de facto standard for integrating data into a data warehouse, and it continues to be a common pattern for data warehousing, data lakes, operational data stores, and master data hubs. Data is extracted from a data store such as an operational database, then transformed to cleanse, standardize, and integrate it before loading into a target database. ETL processing is executed as scheduled batch processing, and data latency is inherent in batch processing. Mini-batch and micro-batch processing help to reduce data latency, but zero-latency ETL is not practical. ETL works well when complex data transformations are required. It is especially well-suited for data integration when all data sources are not ready at the same time. As each individual source is ready, the data source is extracted independently of other sources. When all source data extracts are complete, p...
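The extract/transform/load stages can be sketched in a few lines; the in-memory lists and the "name,country" row format below are stand-ins for a real source database and target warehouse:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class EtlSketch {
    public static void main(String[] args) {
        // Extract: rows pulled from a hypothetical operational database,
        // each row is "name,country" exactly as it arrived (unclean)
        List<String> extracted = Arrays.asList("  alice ,us", "BOB,US");

        // Transform: cleanse (trim) and standardize (uppercase) each field
        List<String> transformed = new ArrayList<>();
        for (String row : extracted) {
            String[] fields = row.split(",");
            transformed.add(fields[0].trim().toUpperCase() + ","
                    + fields[1].trim().toUpperCase());
        }

        // Load: write into the target store (stand-in: print each row)
        for (String row : transformed) {
            System.out.println(row);
        }
    }
}
```

In a real scheduled batch job, the extract step would read from the source database, and the load step would bulk-insert into the warehouse; the three-stage shape stays the same.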

Core principles of API-first development

The core principles of API-first development With so many developers using the term API-first incorrectly (or perhaps partially correctly), it can be difficult to know which products are genuinely API-first and which ones aren’t. It’s because of this murky definition that we’ve collected five core principles of an API-first development approach.  1. Your API is a product Publishing an API is easy. What’s difficult is preparing it for public consumption. That’s the difference between merely creating APIs and treating them as products. An API-first approach requires you to think about how developers will interact with your API, how you’ll educate them on its functionality, how you’ll maintain it over time, which tools to use to build the API, and how you’ll adhere to standards of compatibility, security, and simplicity. When a company builds a product, it must meet industry standards. For APIs, this means practicing foundational software design and development cycles that deliver a q...

How Do HTTP Requests Work?

  How Do HTTP Requests Work? HTTP requests work as the intermediary transport between a client/application and a server. The client submits an HTTP request to the server, and after processing the message, the server sends back a response. The response contains status information about the request.

What Are the Various Types of HTTP Request Methods?

GET GET is used to retrieve and request data from a specified resource on a server. GET is one of the most popular HTTP request methods. In simple words, the GET method is used to retrieve whatever information is identified by the Request-URI.

HEAD The HEAD method requests a response similar to that of a GET request, but without a message body. The HEAD request method is useful for retrieving the metadata carried in the response headers without transferring the entire content. The method is commonly used when testing hypertext links for accessibility, validity...
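The GET/HEAD distinction above can be sketched with the standard `java.net.http` client (Java 11+); the resource URL is a made-up example, and the requests are only built, not sent:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class HttpMethodsDemo {
    public static void main(String[] args) {
        // Hypothetical resource; any Request-URI works the same way
        URI uri = URI.create("https://example.com/resource");

        // GET: retrieve the representation identified by the Request-URI
        HttpRequest get = HttpRequest.newBuilder(uri).GET().build();

        // HEAD: same headers as GET, but the response carries no message body
        HttpRequest head = HttpRequest.newBuilder(uri)
                .method("HEAD", HttpRequest.BodyPublishers.noBody())
                .build();

        System.out.println(get.method() + " " + get.uri());
        System.out.println(head.method() + " " + head.uri());
    }
}
```

To actually send a request, pass it to `HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())` and inspect the returned status code and headers.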

Request and response flow of multi-tiered architectures

Types of N-Tier Architectures There are different types of N-Tier architectures, such as 3-Tier, 2-Tier, and 1-Tier architecture. First, we will look at 3-Tier architecture, which is the most important.

3-Tier Architecture By looking at the diagram below, you can easily identify that a  3-tier architecture  has three different layers:

- Presentation layer
- Business Logic layer
- Database layer

These three layers can be further subdivided into different sub-layers depending on the requirements. Some of the popular sites that have applied this architecture are MakeMyTrip.com, the Salesforce enterprise application, Indian Railways (IRCTC), Amazon.com, etc.

Some common terms to remember, so as to understand the concept more clearly. Distributed Network:  a network architecture where components located on networked computers coordinate and communicate their actions only by passing messages. It is a collection of multiple systems situated at different nod...
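The request/response flow through the three layers can be sketched in a single file; the `UserRepository`/`UserService` names and the in-memory map are hypothetical stand-ins for a real database and service tier:

```java
import java.util.HashMap;
import java.util.Map;

public class ThreeTierSketch {
    // Database layer: in-memory map standing in for a real database
    static class UserRepository {
        private final Map<Integer, String> rows = new HashMap<>(Map.of(1, "alice"));
        String findName(int id) { return rows.get(id); }
    }

    // Business Logic layer: applies rules on top of the data layer
    static class UserService {
        private final UserRepository repo = new UserRepository();
        String greeting(int id) {
            String name = repo.findName(id);
            return name == null ? "unknown user" : "Hello, " + name;
        }
    }

    // Presentation layer: formats the result for the client
    public static void main(String[] args) {
        UserService service = new UserService();
        // a request flows presentation -> business logic -> database, and back
        System.out.println(service.greeting(1));
    }
}
```

Each layer talks only to the layer directly below it, which is what lets the tiers be deployed on separate machines in a distributed network.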

gRPC - Future of Complex API services

gRPC (gRPC Remote Procedure Calls) is an open source remote procedure call (RPC) system initially developed at Google in 2015. It uses HTTP/2 for transport and Protocol Buffers as the interface description language, and provides features such as authentication, bidirectional streaming and flow control, blocking or nonblocking bindings, and cancellation and timeouts. It generates cross-platform client and server bindings for many languages. The most common usage scenarios include connecting services in a microservices-style architecture, or connecting mobile device clients to backend services. gRPC's complex use of HTTP/2 makes it impossible to implement a gRPC client directly in the browser, instead requiring a proxy. Reference:  https://developers.google.com/protocol-buffers/docs/javatutorial