3 Things You Might Not Know About PostgreSQL | by Dmytro Khmelenko | Feb, 2022

Advance your knowledge of the popular database

PostgreSQL is a popular relational database that follows the SQL standard. Since its initial release back in 1996 it keeps attracting digital businesses and developers. Being an open-source project allows everyone to go through the code and contribute to it. It demonstrates good performance, and its benchmarks can compete with other databases such as MySQL, SQL Server, and Oracle Database.

SQL queries are largely standardized across database management systems. They may have some minor variations, but those aren't significant. What's more interesting for engineers to know is that Postgres offers something beyond just storing the data, and that can give the team insights about the data they work with.

When your system operates at a large scale, you may be curious to understand how much data you've collected. We want to know the size of the database, of a specific table, or even of a certain column. Such information is valuable for understanding whether disk space is being used efficiently and for predicting scaling events.

Postgres offers a good set of functions related to size. For example, to get the size of the entire database we can use the function pg_database_size(). It prints out the size of the given database in bytes. The function pg_table_size() displays the size in bytes of a certain table in the current database, excluding indexes. The function pg_total_relation_size() does almost the same but includes the space needed for indexes. And if we want to know how much space a concrete column value requires, we can use the function pg_column_size().

To run these functions we need to open the Postgres console by running psql in a terminal. We may also use third-party tools such as DataGrip from JetBrains or similar. Once we're in the console, we can run SQL queries that call these functions.

SELECT pg_database_size('my_database');

The query above prints out the size of the database in bytes.


If you feel uncomfortable converting bytes to KB or MB, there's a dedicated function for that. Wrapping the size with pg_size_pretty() produces human-readable formatting.

SELECT pg_size_pretty(pg_database_size('my_database'));

 pg_size_pretty
----------------
 8065 kB

The full list of these functions is available in the official documentation, which is detailed enough.
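The size functions above can also be combined in a single query. As a sketch, the following lists the user tables in the current database ordered by total size, using the relid and relname columns of pg_stat_user_tables (table names and sizes will of course differ in your database):

```sql
-- List user tables ordered by total size (including indexes),
-- with pg_size_pretty() for human-readable output.
SELECT relname,
       pg_size_pretty(pg_table_size(relid))          AS table_size,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_stat_user_tables
ORDER BY pg_total_relation_size(relid) DESC;
```

This is handy as a one-off check for which tables dominate disk usage before digging into individual columns.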

Postgres has a built-in system for collecting various statistics. These statistics can include the number of accesses to certain tables, the most recent queries, and the number of rows modified by queries in a specified database. These statistics probably aren't something to look at on a daily basis. However, when analyzing the performance of the database and searching for insights, they're the first place to jump into.

Since the statistics are collected automatically, all we need to do is query them. We can write a regular SQL query to fetch specific information. For example, let's fetch the number of rows inserted, updated, and deleted for every table in our database. This information is present in the view pg_stat_user_tables.

SELECT relname, n_tup_ins, n_tup_upd, n_tup_del FROM pg_stat_user_tables;

The query above returns the number of inserted, updated, and deleted rows for every table in the current database. In my case, the output looks like this.

  relname  | n_tup_ins | n_tup_upd | n_tup_del
-----------+-----------+-----------+-----------
 customers |        51 |         0 |         0
 funds     |       128 |         6 |         0
 profiles  |       139 |         0 |         0
 guides    |       801 |         0 |         8

Personally, I had to use this once when regular SQL didn't work for me. My challenge was counting the number of rows in a table. The table was huge, several hundred million rows, so the COUNT statement kept timing out. Luckily, the statistics view made it possible. The following query returns an estimated number of rows for the customers table in my database.

SELECT n_live_tup FROM pg_stat_user_tables WHERE relname = 'customers';

The n_live_tup column represents an estimated number of live rows for each table, and relname shows the name of each table. The view pg_stat_user_tables contains data about user tables only, excluding any system tables.

You can find the most interesting statistics in the following views:

  • pg_stat_database — database-wide statistics;
  • pg_stat_all_tables — statistics about all tables in the current database;
  • pg_stat_activity — statistics related to current queries and processes;

The full list of all the views and their columns is available in the official documentation. Check it out to discover how your database performs.
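As a quick taste of what these views offer, the following sketch against pg_stat_activity lists the currently running queries, longest-running first (pid, state, query_start, and query are standard columns of the view):

```sql
-- Show currently active queries with how long each has been running.
SELECT pid,
       state,
       now() - query_start AS runtime,
       query
FROM pg_stat_activity
WHERE state = 'active'
ORDER BY runtime DESC;
```

A query like this is a common first step when the database suddenly feels slow and you suspect a long-running statement.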

Did you ever ask yourself what it takes to run a query? How many rows will be affected? What are the costs of running that query? SQL has a very powerful command, EXPLAIN, that Postgres supports along with other databases. How does it work?

When we run the EXPLAIN statement, it examines the query that follows it. It determines the operations needed to run it, such as sequential scans, index scans, and bitmap index scans. When the query is more complex and includes joins, even more scans will be involved. In that case EXPLAIN records all of them and prepares them for display.

EXPLAIN SELECT * FROM customers;

 Seq Scan on customers  (cost=0.00..10.50 rows=50 width=1388)

Here we analyzed a simple SELECT query retrieving all records from the customers table. The result is the query plan. It contains only a single entry, a sequential scan on the customers table. Along with that, we can see the estimated cost of running the query, the number of rows returned, and the average row width in bytes.

This command is very helpful when it comes to troubleshooting and identifying bottlenecks in complex queries. You can get a detailed query plan and easily discover problematic statements. And since it's available in many other databases, it's an extremely useful tool for every software engineer.
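When estimates aren't enough, the related EXPLAIN ANALYZE form actually executes the query and reports real row counts and timings alongside the estimates. A sketch against the customers table from the earlier examples (the id column here is an assumption for illustration):

```sql
-- EXPLAIN ANALYZE runs the query for real, so wrap data-modifying
-- statements in a transaction you can roll back afterwards.
EXPLAIN ANALYZE SELECT * FROM customers WHERE id = 42;
```

Comparing the estimated rows against the actual rows in the output is a common way to spot stale planner statistics.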
