OpenAPI has become the de facto standard for designing web APIs, and numerous tools have been developed around its ecosystem. In this article, I will demonstrate the workflow of using OpenAPI in both backend and frontend projects.
API Server
There are code-first and design-first approaches to using OpenAPI, and here we take the code-first approach: write the API server first, add specifications to the method docs, then generate the final OpenAPI specification. The API server will be developed with the Python Flask framework and the apispec library with its marshmallow extension. Let's first install the dependencies:
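The original install command is not shown above; assuming a plain pip-based setup (pin versions as your project requires), it would look something like this:

pip install flask apispec marshmallow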
Bootstrap V5 and Vue 3.x have been out for a while, but the widely used BootstrapVue library is still based on Bootstrap V4 and Vue 2.x. A new version of BootstrapVue is under development, and there is an alternative project, BootstrapVue 3, currently in alpha. However, since Bootstrap is mainly a CSS framework, and it has dropped the jQuery dependency in V5, it is not that difficult to integrate it into a Vue 3.x project yourself. In this article, we will go through the steps of creating such a project.
Create Vite project
The recommended way of using Vue 3.x is with Vite. Install yarn and create a project from the vue-ts template:
yarn create vite bootstrap-vue3 --template vue-ts
cd bootstrap-vue3
yarn install
yarn dev
Add Bootstrap dependencies
Bootstrap is published on npm, and it has an extra dependency, Popper, so let's install them both:
yarn add bootstrap @popperjs/core
You may also need the type definitions:
yarn add -D @types/bootstrap
Use Bootstrap CSS
Just add a line to your App.vue file and you are free to use Bootstrap CSS:
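The exact line is not shown above; one common option, assuming the default Vite setup, is a plain CSS import in the App.vue style block:

<style>
@import "bootstrap/dist/css/bootstrap.min.css";
</style>

Alternatively, the stylesheet can be imported in main.ts with import "bootstrap/dist/css/bootstrap.min.css".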
Kubernetes is the trending container orchestration system, which can be used to host various applications, from web services to data processing jobs. Applications are packaged in self-contained yet light-weight containers, and we declare how they should be deployed, how they scale, and how they are exposed as services. Flink is also a trending distributed computing framework that can run on a variety of platforms, including Kubernetes. Combining them brings us robust and scalable deployments of data processing jobs, and Flink can more safely share a Kubernetes cluster with other services.
When deploying Flink on Kubernetes, there are two options: session cluster and job cluster. A session cluster is like running a standalone Flink cluster on k8s; it can accept multiple jobs and is suitable for short-running tasks or ad-hoc queries. A job cluster, on the other hand, deploys a full Flink cluster for each individual job. We build a container image for each job and provide it with dedicated resources, so that jobs have less chance of interfering with each other and can scale out independently. This article will illustrate how to run a Flink job cluster on Kubernetes; the steps are listed below, followed by a brief command-line sketch:
Compile and package the Flink job jar.
Build a Docker image containing the Flink runtime and the job jar.
Create a Kubernetes Job for Flink JobManager.
Create a Kubernetes Service for this Job.
Create a Kubernetes Deployment for Flink TaskManagers.
Enable Flink JobManager HA with ZooKeeper.
Correctly stop and resume Flink job with SavePoint facility.
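Assuming one manifest file per resource (the file and image names below are illustrative, not taken from the original article), the overall command-line workflow looks roughly like this:

# build and publish an image containing the Flink runtime and the packaged job jar
docker build -t registry.example.com/flink-demo-job:1.0 .
docker push registry.example.com/flink-demo-job:1.0
# create the Kubernetes resources described above
kubectl apply -f jobmanager-job.yaml
kubectl apply -f jobmanager-service.yaml
kubectl apply -f taskmanager-deployment.yaml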
Apache Hive introduced transactions in version 0.13 to fully support ACID semantics on Hive tables, including INSERT/UPDATE/DELETE/MERGE statements, streaming data ingestion, etc. In Hive 3.0, this feature is further improved by optimizing the underlying data file structure, reducing constraints on the table schema, and supporting predicate push down and vectorized queries. Examples and setup instructions can be found on the Hive wiki and in other tutorials, while this article will focus on how transactional tables are stored on HDFS, and take a closer look at the read-write process.
File Structure
Insert Data
CREATE TABLE employee (
  id int,
  name string,
  salary int
)
STORED AS ORC
TBLPROPERTIES ('transactional'='true');
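The insert statement and the resulting directory listing are not reproduced above; as a hypothetical illustration (actual write IDs and paths depend on your environment), inserting a single row produces a delta directory under the table's location:

INSERT INTO employee VALUES (1, 'Jerry', 5000);

-- the table directory may then contain something like:
-- /user/hive/warehouse/employee/delta_0000001_0000001_0000/bucket_00000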
The schema of this folder's name is delta_minWID_maxWID_stmtID, i.e. the "delta" prefix, the transactional write range (minimum and maximum write ID), and the statement ID. In detail:
All INSERT statements create a delta directory. An UPDATE statement also creates a delta directory, right after a delete directory; delete directories are prefixed with "delete_delta".
Hive will assign a globally unique ID for every transaction, both read and write. For transactional writes like INSERT and DELETE, it will also assign a table-wise unique ID, a.k.a. a write ID. The write ID range will be encoded in the delta and delete directory names.
The statement ID is used when a single transaction issues multiple writes into the same table.
Apache Flink is another popular big data processing framework, which differs from Apache Spark in that Flink uses stream processing to mimic batch processing, providing sub-second latency along with exactly-once semantics. One of its use cases is building a real-time data pipeline that moves and transforms data between different stores. This article will show you how to build such an application, and explain how Flink guarantees its correctness.
Demo ETL Application
Let us build a project that extracts data from Kafka and loads it into HDFS. The result files should be stored in bucketed directories according to event time. Source messages are encoded in JSON, and the event time is stored as a timestamp. Samples are:
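The original sample messages are not reproduced here; a hypothetical message of this shape (field names made up for illustration) could be:

{"timestamp": 1545184226, "event": "page_view", "user_id": 1001}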
In Spark 1.3, the team introduced a data source API to help quickly integrate various input formats with Spark SQL. But eventually this version of the API became insufficient, and the team needed to add a lot of internal code to provide more efficient solutions for Spark SQL data sources. So in Spark 2.3, the second version of the data source API was released, which is supposed to overcome the limitations of the previous version. In this article, I will demonstrate how to implement a custom data source for Spark SQL in both the V1 and V2 APIs, to help understand their differences and the new API's advantages.
A RelationProvider defines a class that can create a relational data source for Spark SQL to work with. It can initialize itself with the provided options, such as a file path or authentication settings. BaseRelation is used to define the data schema, which can be loaded from a database or a Parquet file, or specified by the user. This class also needs to mix in one of the Scan traits, implementing the buildScan method, which returns an RDD.
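As a rough sketch of the V1 API (the relation name, schema, and rows below are made up for illustration and are not taken from the original article), a minimal read-only data source could look like this:

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, RelationProvider, TableScan}
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

// Spark looks up a class named DefaultSource in the package passed to format()
class DefaultSource extends RelationProvider {
  override def createRelation(
      sqlContext: SQLContext,
      parameters: Map[String, String]): BaseRelation =
    new DemoRelation(sqlContext)
}

// BaseRelation provides the schema; TableScan#buildScan returns the rows
class DemoRelation(val sqlContext: SQLContext) extends BaseRelation with TableScan {
  override val schema: StructType = StructType(Seq(
    StructField("id", IntegerType),
    StructField("name", StringType)))

  override def buildScan(): RDD[Row] =
    sqlContext.sparkContext.parallelize(Seq(Row(1, "alice"), Row(2, "bob")))
}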
Sink is the last component in an Apache Flume data flow, and it is used to output data to storage systems like local files, HDFS, Elasticsearch, etc. In this article, I will illustrate how Flume's HDFS sink works, by analyzing its source code with diagrams.
Sink Component Lifecycle
In the previous article, we learned that every Flume component implements the LifecycleAware interface and is started and monitored by LifecycleSupervisor. The sink component is not invoked directly by this supervisor, but is wrapped in the SinkRunner and SinkProcessor classes. Flume supports three different sink processors, which connect channels and sinks with different semantics, but here we only consider the DefaultSinkProcessor, which accepts a single sink, and we will skip the concept of sink groups as well.
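To ground the discussion, the central contract a sink implements is the process method; simplified, the interface looks roughly like this (imports and annotations from the org.apache.flume packages are omitted):

// simplified from org.apache.flume.Sink
public interface Sink extends LifecycleAware, NamedComponent {
  // the channel this sink drains events from
  void setChannel(Channel channel);
  Channel getChannel();
  // invoked repeatedly by the SinkRunner (via a SinkProcessor);
  // returns READY when events were delivered, BACKOFF otherwise
  Status process() throws EventDeliveryException;
  enum Status { READY, BACKOFF }
}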
NullPointerException happens when you dereference a possibly null object without checking it. It's a common exception that every Java programmer may encounter in daily work. There're several strategies that can help us avoid this exception and make our code more robust. In this article, I will list both the traditional ways and those based on tools and new features introduced in recent versions of Java.
Runtime Check
The most obvious way is to use if (obj == null) to check every variable you need to use, whether it comes from a function argument, a return value, or an instance field. When you receive a null object, you can throw a different, more informative exception like IllegalArgumentException. There are some library functions that can make this process easier, like Objects#requireNonNull:
public void testObjects(Object arg) {
  Object checked = Objects.requireNonNull(arg, "arg must not be null");
  checked.toString();
}
Or use Guava's Preconditions class, which provides all kinds of argument-checking facilities:
public void testGuava(Object arg) {
  Object checked = Preconditions.checkNotNull(arg, "%s must not be null", "arg");
  checked.toString();
}
We can also let Lombok generate the check for us, which will throw a more meaningful NullPointerException:
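The original snippet is not shown above; a minimal sketch using Lombok's @NonNull annotation would be:

import lombok.NonNull;

public class LombokExample {
  // Lombok inserts a null check at the start of the method and throws a
  // NullPointerException whose message names the offending parameter
  public void testLombok(@NonNull Object arg) {
    arg.toString();
  }
}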
When using the ESLint React plugin, you may find a rule called jsx-no-bind. It prevents you from using .bind or an arrow function in a JSX prop. For instance, ESLint will complain about an arrow function in the onClick prop.
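The original example is not reproduced here; an illustrative component that triggers the rule might look like this (names are made up):

import React from "react";

interface Props {
  id: number;
  onDelete: (id: number) => void;
}

class DeleteButton extends React.Component<Props> {
  render() {
    // jsx-no-bind flags the arrow function passed to onClick
    return (
      <button onClick={() => this.props.onDelete(this.props.id)}>
        Delete
      </button>
    );
  }
}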
There’re two reasons why this rule is introduced. First, a new function will be created on every render call, which may increase the frequency of garbage collection. Second, it will disable the pure rendering process, i.e. when you’re using a PureComponent, or implement the shouldComponentUpdate method by yourself with identity comparison, a new function object in the props will cause unnecessary re-render of the component.
But some people argue that these two reasons are not solid enough to enforce this rule on all projects, especially when the solutions introduce more code and decrease readability. In the Airbnb ESLint preset, the team only bans the usage of .bind, but allows arrow functions in both props and refs. I did some googling, and was convinced that this rule is not quite necessary. Some say it's premature optimization, and you should measure before you optimize; I agree with that. In the following sections, I will illustrate how arrow functions affect pure components, what solutions we can use, and talk a little bit about React rendering internals.