Implementing a web application over time
HTML/Javascript
First there was HTML, where styles were attributes of the HTML tag, and we used Javascript to create simple dynamic components such as a timestamp that updated every second. A web client (a browser) would request an HTML page from a web server using HTTP and a URL; the web server (normally running Apache, Nginx, Tomcat or similar server software) would send the HTML and Javascript files back to the web client to be displayed. The contents of those files were interpreted by the web browser software running on the client and displayed to the user.
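A minimal sketch of that early kind of dynamic component, a clock that updates every second (the function name and element id are illustrative, not from the original):

```javascript
// Format a Date as HH:MM:SS -- the kind of simple dynamic output
// early Javascript was used for. formatTime is a hypothetical helper.
function formatTime(date) {
  const pad = (n) => String(n).padStart(2, "0");
  return pad(date.getHours()) + ":" + pad(date.getMinutes()) + ":" + pad(date.getSeconds());
}

// In the browser this would run once a second and write into the page:
// setInterval(() => {
//   document.getElementById("clock").textContent = formatTime(new Date());
// }, 1000);
```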
HTML/CSS/Javascript
Next, we separated the styles into a separate file and referred to the components we wanted to style. This led to very cool sites like CSS Zen Garden, where the same HTML could be made to look completely different just by swapping in a different CSS file containing different styles.
Server Based Pages
A web client (the browser) calls a web server, which sends back HTML, CSS and Javascript files. With simple web servers these files were just served from a directory on the web server hardware.
Server based languages (such as PHP and ASP) were introduced on the server. The web client asked for a PHP file and could pass arguments with the request; the PHP interpreter built into the web server intercepted the call and sent back HTML, CSS and Javascript files. But the PHP interpreter could put anything into these files at runtime. This allowed dynamic pages to show database information.
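The essence of what a server-side language like PHP does can be sketched as follows (in Javascript rather than PHP, to keep one language throughout; the function and field names are invented for illustration): the page is a template, and the server fills in data before sending finished HTML to the browser.

```javascript
// A sketch of server-side page generation: the server interpolates
// database rows into an HTML template at request time. renderPage and
// the row fields are illustrative names, not from any real framework.
function renderPage(title, rows) {
  const items = rows.map((r) => "<li>" + r.name + "</li>").join("");
  return "<html><head><title>" + title + "</title></head>" +
         "<body><ul>" + items + "</ul></body></html>";
}

// The client only ever sees the finished HTML, never the template logic.
const html = renderPage("Users", [{ name: "Alice" }, { name: "Bob" }]);
```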
Dynamic Web / Web 2.0
The problem with server based pages is that the whole page had to be refreshed to show new database information. Web 2.0 allowed AJAX calls to be made to the server, and the server would send just data back to the web client, which then used Javascript callbacks to dynamically display that data. This made web based applications functionally comparable to desktop applications. The AJAX calls would typically use a REST interface to request data that was returned in JSON format.
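A sketch of that pattern: the server returns JSON rather than a whole page, and a callback on the client turns it into display text. The URL and field names here are hypothetical.

```javascript
// The Web 2.0 pattern: the server sends data (JSON), not HTML, and the
// client decides how to display it. Field names are illustrative.
function renderUser(json) {
  const user = JSON.parse(json);   // the server's JSON response body
  return user.name + " (" + user.email + ")";
}

// In the browser, the AJAX call and callback would look roughly like:
// fetch("/api/users/42")
//   .then((res) => res.json())
//   .then((user) => {
//     document.getElementById("user").textContent = user.name;
//   });
```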
Client Server
Typical web applications were split into client and server components. The client is the web software (HTML, CSS, Javascript) that sends requests to the server. The server is typically a load-balanced, API-driven program which returns data to satisfy the client request. The HTTP protocol used for all web development allows for different types of calls. A POST or PUT call almost always updates data, whereas a GET call almost always returns data and does not affect the database.
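The GET-reads, POST/PUT-writes convention can be sketched as a tiny dispatcher (the `dispatch` function and in-memory store are illustrative, not a real framework API):

```javascript
// A sketch of the HTTP verb convention: GET reads without side effects,
// POST and PUT update data. The store stands in for a database.
const store = { users: { 1: { name: "Ada" } } };

function dispatch(method, id, body) {
  if (method === "GET") return store.users[id];   // read only, no side effects
  if (method === "POST" || method === "PUT") {    // create or update
    store.users[id] = body;
    return body;
  }
  throw new Error("Unsupported method: " + method);
}
```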
Model View Controller and Frameworks
Different programming methodologies are introduced from time to time, and MVC was a methodology for laying out your code as a model, a view and a controller. This allowed code to be more structured for bigger systems. Most languages also had frameworks for application development that forced developers on teams to follow certain programming rules, such as putting code for controllers in certain folders and code for views in different folders.
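A minimal sketch of the MVC split (all names here are illustrative; real frameworks enforce this with folder layout and conventions):

```javascript
// Model: owns the application data.
const model = { todos: ["write post"] };

// View: turns model data into output (a string here, HTML in practice).
function view(todos) {
  return todos.map((t) => "* " + t).join("\n");
}

// Controller: handles a user action, updates the model, re-renders the view.
function addTodo(text) {
  model.todos.push(text);
  return view(model.todos);
}
```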
Client Side Development
For a large part of my career HTML, CSS and Javascript were enough to be a good client-side developer, and jQuery was the framework used everywhere. The client side was where UI design was a large part of the work: ADA requirements had to be followed, internationalization had to be implemented, and multiple skins for the same system could be applied to the same back-end code. Nowadays three frameworks have emerged as the most used - ReactJS, AngularJS and VueJS. Often a front-end engineer will only know HTML, CSS, Javascript and one of the frameworks, and that is enough to get a very well paid job. I would encourage people to also learn native Javascript and jQuery, if only to study how well jQuery code is constructed. A lot of client-side development is showing results from a query sent to the server, so understanding grids, dynamic screen sizes, different outputs for screens and printers, graphing etc. are essential skills for developers to know.
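Since so much client-side work is displaying query results, here is a sketch of turning rows from the server into an HTML grid (function and field names are illustrative):

```javascript
// Render server rows as an HTML table: a header row from the column
// names, then one row per record. Names are illustrative.
function renderGrid(columns, rows) {
  const head = "<tr>" + columns.map((c) => "<th>" + c + "</th>").join("") + "</tr>";
  const body = rows
    .map((row) => "<tr>" + columns.map((c) => "<td>" + row[c] + "</td>").join("") + "</tr>")
    .join("");
  return "<table>" + head + body + "</table>";
}
```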
Server Side Development
Server side development is largely API driven and involves receiving an API request and updating or querying the data store. Data to be updated is validated first. Data read from the store is returned in a format the front-end can process. The requester may be a web client, but it may also be a cron job or a trigger from an external interface. A cron job may request the server to generate data for an external interface, or to read data an external interface has sent to our application.
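The validate-before-update step can be sketched as follows (the payload fields and error messages are invented for illustration):

```javascript
// Validate an incoming update before it touches the store, and return
// a result shape the front-end can process. Field names are illustrative.
function validateUpdate(payload) {
  const errors = [];
  if (typeof payload.email !== "string" || !payload.email.includes("@")) {
    errors.push("email is invalid");
  }
  if (typeof payload.age !== "number" || payload.age < 0) {
    errors.push("age must be a non-negative number");
  }
  return { ok: errors.length === 0, errors };
}
```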
The server may have hundreds of clients and may need to process thousands of concurrent requests. It may be OLTP based, where quick storage of data is critical, or DSS based, where quick selection of data is critical. It is often true that a system needs both capabilities, so warehousing and summarization of data become part of the system. For a server to process so many requests it needs to be load balanced. This means there are multiple copies of the server program running, and a router or ingress controller sends each request to the next available copy. In smaller systems this complicated stateful communication, where the same client sent multiple requests, each with a small amount of data that was stored on the server - each subsequent request from the same client had to go to the same server. Nowadays it is best to write fully stateless applications so that any instance of a server program can process the next request.
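The stateless style can be sketched like this: instead of the server remembering a client's earlier requests, every request carries everything needed (here a user id and a paging cursor), so any load-balanced instance can handle it. All field names are invented for illustration.

```javascript
// A stateless request handler: no per-client memory between calls.
// The request itself carries the user id and cursor; db stands in
// for the shared data store. Names are illustrative.
function handleRequest(request, db) {
  if (!Number.isInteger(request.userId)) throw new Error("invalid userId");
  const page = request.cursor || 0;
  return {
    userId: request.userId,
    items: db.slice(page, page + 2),   // a page of results
    nextCursor: page + 2,              // the client sends this back next time
  };
}

const db = ["a", "b", "c", "d"];
```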
API driven public servers
As more API driven servers were being developed, there was a push to make a lot of normally hidden data available to the public. For example, Google released APIs for accessing the map data used in Google Maps, MLS property data was released via an API, census data was released via an API, flight arrival data was released via an API, and Yahoo had several data streams available via an API and even produced a SQL-like language for processing them.
Some of this data was free to the public and some involved a fee, but most agreed that making this data available could only be a good thing, and new applications could use it in interesting ways. Most of the APIs were REST interfaces using JSON-formatted data, but XML data (the standard before JSON for EDI-type applications) was also common.
Mashups and Service Orientated Architecture
Many applications run on the internet. A new type of information-based application called a mashup was created, which took information from several public APIs and merged them together into a rich application. One example of this was Zillow, which combined MLS data with Google Maps data to show houses for sale in a new, easy to use graphical interface. These mashup applications became very popular, and the public APIs started being referred to as services. A program that used multiple services, or an enterprise that created multiple services internally for internal applications to use, started to be referred to as a Service Orientated Architecture (SOA). The idea was that each service provided an API to retrieve data from an internal store, and that data could be used to build better apps. For example, one company I worked at had an Oracle system and a SQL Server system that never talked to each other until they built APIs and defined them as services, then built richer, more functional internal systems using the new combined data that was exposed.
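A mashup in the Zillow style can be sketched as a join across two services' responses: listing data from one API and coordinates from another, keyed on address. All data and field names here are invented for illustration.

```javascript
// Merge listings from one service with geocoding results from another,
// keyed on address -- the core of an information mashup. Illustrative names.
function mashup(listings, geocoded) {
  const coords = new Map(geocoded.map((g) => [g.address, g]));
  return listings
    .filter((l) => coords.has(l.address))   // keep only listings we can place
    .map((l) => ({ ...l, lat: coords.get(l.address).lat, lng: coords.get(l.address).lng }));
}
```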
MicroServices
The microservice terminology was born out of the popularity of service orientated architecture. The idea was that if an internal app was broken up into microservices, or smaller components, that would make the overall app more efficient and solve some technical problems that internal applications have typically faced.
As long as a microservice had a public API, it did not matter what language it was coded in. As long as it had its own tables and its own code base, it could be maintained and released without having to release the whole system. The service could even be developed by different teams in different locations, as long as the API was strictly implemented. Even better, the service could run on the cloud (either public or private).
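That boundary can be sketched as follows: the service owns its data privately and exposes only an API surface, so callers never touch its tables directly. Everything here is illustrative.

```javascript
// A sketch of a microservice boundary: private store, public API only.
function makeInventoryService() {
  const stock = { widget: 5 };               // private store, never shared
  return {
    getStock: (sku) => stock[sku] ?? 0,      // the public API surface
    reserve: (sku) => {
      if ((stock[sku] ?? 0) < 1) throw new Error("out of stock");
      stock[sku] -= 1;
      return stock[sku];
    },
  };
}

const inventory = makeInventoryService();
```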
Of course, everything has its own caveats and microservices are no different. Most systems don't need them, many systems implemented them incorrectly, and they add complexity where complexity never used to exist. However, they do allow systems to scale very quickly and, combined with dynamic server creation on the cloud, allow for managed growth far more efficiently than before.