I have a database on PostgreSQL 9.2 that has a main schema with around 70 tables and a variable number of identically structured per-client schemas of 30 tables each. The client schemas have foreign keys referencing the main schema, not the other way around.

I just started filling the database with some real data taken from the previous version. The DB had reached about 1.5 GB (it is expected to grow to several tens of GB within weeks) when I had to do a bulk delete in a very central table in the main schema. All concerned foreign keys are marked ON DELETE CASCADE.

It was no surprise that this would take a long time, but after 12 hours it became clear that I was better off starting over: dropping the DB and launching the migration again. But what if I need to repeat this operation later, when the DB is live and much larger? Are there alternative, faster methods? Would it be much faster if I wrote a script that browses the dependent tables, starting at the table furthest from the central table, and deletes the dependent rows table by table? An important detail is that there are triggers on some of the tables.

The easiest way to attack the problem is to get detailed timing from PostgreSQL itself, with EXPLAIN. For this you need to find at minimum a single query that does complete but takes longer than expected. Let's say the slow query looks like

delete from mydata where id='897b4dde-6a0d-4159-91e6-88e84519e6b6'

Instead of really running that query, you can do

begin; explain analyze delete from mydata where id='897b4dde-6a0d-4159-91e6-88e84519e6b6'; rollback;

The rollback at the end allows running this without really modifying the database, and you still get the detailed timing of what took how long.
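Written out step by step, the transaction-wrapped timing run looks like the sketch below. The table name `mydata` and the id value are the ones from the question; substitute your own central table and key.

```sql
-- Open a transaction so the delete can be undone afterwards.
BEGIN;

-- EXPLAIN ANALYZE actually executes the statement and reports
-- per-node timing. Because some of the tables carry triggers (and
-- ON DELETE CASCADE foreign keys are enforced by internal triggers),
-- the output ends with "Trigger for constraint ..." lines showing
-- how much time each trigger consumed and how many times it fired.
EXPLAIN ANALYZE
DELETE FROM mydata
WHERE id = '897b4dde-6a0d-4159-91e6-88e84519e6b6';

-- Undo the delete: the database is left unmodified, but the
-- timing output has already been printed.
ROLLBACK;
```

The trigger timing lines are usually the interesting part for a cascaded delete: they show which dependent table the time is actually being spent in.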