I don't know how to contribute to your project (I am used to GitHub). Anyway, we have a huge database with many thousands of projects and we are facing performance issues, mainly due to the number of SQL queries executed. While auditing the application I saw indexes that could be improved. For example, on the parameter table you don't take full advantage of MySQL indexes on multiple columns. I suggest merging the parameterProject and parameterUser indexes into a single index with this patch:

ALTER TABLE parameter DROP INDEX parameterProject;
ALTER TABLE parameter DROP INDEX parameterUser;
ALTER TABLE parameter ADD INDEX parameterUserProject (idUser, idProject);

The advantage is that queries such as

select * from parameter where parameter.idUser is null and parameter.idProject is null

would use a single non-unique key lookup instead of merging two indexes at each execution. In our testing environment, these queries now execute in under 1 ms instead of 15 ms, and this fix alone saves 100 ms on the saveDataToSession.php script.
Good idea.
Moreover, idProject is not used in this table, so the column (and its index) could possibly be removed.
But if we keep the column, merging the indexes will be a good improvement.
Ticket #5136 recorded.
I found out that ProjeQtOr can trace all SQL queries by setting the trace level to 3 and adding $debugQuery=true; to the parameters file.
I developed this PHP script: gist.github.com/bbalet/52ed04ec0ff6ef1a0b6b6fbd74393b22
It cleans the log, reshapes the queries, and computes some statistics into a CSV file in order to prioritize the audit of the indexes.
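For anyone who wants to reproduce the idea without the gist, the "reshaping" step boils down to normalizing literal values so that structurally identical queries group together. A minimal sketch in Python (the gist itself is PHP; the function name and regexes here are my own approximation, not the gist's code):

```python
import re

def normalize_query(sql: str) -> str:
    """Replace literal values with '?' so that queries differing
    only by their parameters collapse into one pattern."""
    q = sql.strip()
    q = re.sub(r"'(?:[^'\\]|\\.)*'", "?", q)   # quoted string literals
    q = re.sub(r"\b\d+\b", "?", q)             # standalone numeric literals
    q = re.sub(r"\s+", " ", q)                 # collapse whitespace
    return q

# Two queries that differ only by literals map to the same pattern:
a = normalize_query("select * from parameter where idUser = 42")
b = normalize_query("select * from parameter where idUser = 7")
assert a == b  # both become "select * from parameter where idUser = ?"
```

Once queries are normalized this way, counting and averaging per pattern becomes a simple group-by.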
By the way, I'm not sure what you want this code in model/persistence/Sql.php to achieve:
debugTraceLog(round((microtime(true) - $startMicroTime)*1000000)/1000000 . ";" . $sqlRequest);
I had to multiply the time from the log by 1000 in order to get values that make sense.
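For what it's worth, that trace line has the shape "elapsed;query", and since PHP's microtime(true) returns seconds, the round(... * 1000000) / 1000000 expression yields seconds rounded to microsecond precision, which is consistent with needing a ×1000 factor to read milliseconds. A small parsing helper, sketched in Python with my own naming:

```python
def parse_trace_line(line: str) -> tuple[float, str]:
    """Split an 'elapsed;query' trace line and convert the
    elapsed time from seconds to milliseconds."""
    elapsed, _, query = line.partition(";")
    return float(elapsed) * 1000.0, query

ms, sql = parse_trace_line("0.015;select * from parameter")
# ms is approximately 15.0 (milliseconds)
```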
Thanks for sharing.
But why not just use the slow_query_log directive in MySQL?
It automatically traces queries longer than long_query_time and writes them to slow_query_log_file.
I had to multiply the time from the log by 1000 in order to get values that make sense.
Sure, the values in the log are in milliseconds, as most queries take just a few milliseconds.
I want to achieve a different objective:
1 - Get the frequency of each type of query
2 - Get the mean execution time
3 - Optionally detect outliers and remove them for more accurate statistics
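Those three steps can be sketched as follows, assuming each log entry has already been reduced to a (query_pattern, elapsed_ms) pair. This is my own illustration in Python, not the gist's PHP code, and the trimming rule (dropping the fastest and slowest 10% of samples) is one possible outlier filter among others:

```python
from collections import defaultdict
from statistics import mean

def summarize(entries, trim=0.1):
    """entries: iterable of (query_pattern, elapsed_ms) pairs.
    Returns {pattern: (count, mean_ms)}, where the mean is computed
    after trimming the fastest and slowest `trim` fraction of samples,
    discarding outliers such as a one-off lock wait."""
    by_pattern = defaultdict(list)
    for pattern, elapsed in entries:
        by_pattern[pattern].append(elapsed)

    stats = {}
    for pattern, times in by_pattern.items():
        times.sort()
        k = int(len(times) * trim)
        kept = times[k:len(times) - k] if k else times
        stats[pattern] = (len(times), mean(kept))
    return stats
```

With frequency and trimmed mean per pattern, a query that is slow but rarely executed naturally ranks below a moderately slow query executed thousands of times, which matches the prioritization goal above.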
The slow_query_log variable in MySQL simply logs queries above a threshold.
The problem is that it can produce false positives, due for example to a lock or a temporary overload of the database server.
Another point is that you need to prioritize the queries you want to improve.
A slow query that is rarely executed doesn't necessarily need to be optimized.
I'll forward you an example of the output, and later on I will open new threads for code improvements.
Thanks.
Yes, your approach is good.
Just one question about your result file (I have not examined the code generating this file yet): are the values in seconds or in milliseconds?