Does this MariaDB index creation lock the table? Which lock?
I would like to add a simple secondary (non-primary) index to an InnoDB table on MariaDB 5.5.68 (CentOS) in production.
I found no reliable information on whether the following ALTER TABLE read- or write-locks the table. As it will run for a few minutes (50 GB database), I want to find out whether I need to shut down the service or not.
ALTER TABLE tbllog ADD INDEX idx_LogDate(logdate);
Ideally, no locking would happen at all.
I have already found the LOCK=NONE option, but no information on whether it works in MariaDB 5.5.68 :-(
Does anyone know?
See also questions close to this topic
-
In Mongo, if I save a document with the name "Prateek", I don't want a later create operation to save "prateek", "praTEEK", etc.
If I'm adding a new document with the name "India", then I don't want the DB to allow another document with the name "INDIA", "india", "indIA", etc. I'm new and learning; help would be great!
// Controller
var Dinosaur = require('../models/dinosaurs');

//addDino
module.exports.addDino = (req, res) => {
    var name = req.body.name;
    var type = req.body.type;
    var height = req.body.height;
    var weight = req.body.weight;
    var Period = req.body.Period;
    req.checkBody('name', 'Name is required').notEmpty();
    var errors = req.validationErrors();
    if (errors) return res.status(400).send({ message: 'Name is Required' });
    else {
        let newDino = {
            name: name,
            type: type,
            height: height,
            weight: weight,
            Period: Period
        };
        Dinosaur.addDino(newDino, (err, result) => {
            if (err) {
                if (err.name) return res.status(409).send({ message: name + ' Already Exist' });
                else if (err.url) return res.json({ status: false, error: { url: "Url already exist" }, message: err.url });
                else return res.json(err, "Server Error");
            } else {
                return res.status(200).send({ message: "Done" });
            }
        });
    }
}
// Model
var mongoose = require('mongoose');

//dinosaur schema
var DinosaurSchema = mongoose.Schema({
    name: { type: String, unique: true },
    type: { type: String },
    height: { type: Number },
    weight: { type: Number },
    Period: { type: String }
});

var Dinosaur = mongoose.model('dinosaur', DinosaurSchema);

//add
module.exports.addDino = (query, callback) => {
    Dinosaur.create(query, callback);
}
// GetAll: I have already created a document with the name "Brachiosaurus".
// Create: a new create with the first letter lower-case, "brachiosaurus", should not be pushed.
-
Update column values based on another dataframe's index
I have the following dataframes:
NUMS = ['1', '2', '3', '4', '5']
LETTERS = ['a', 'b', 'c']

df1 = pd.DataFrame(index=NUMS, columns=LETTERS)

     a    b    c
1  NaN  NaN  NaN
2  NaN  NaN  NaN
3  NaN  NaN  NaN
4  NaN  NaN  NaN
5  NaN  NaN  NaN

df2 = pd.DataFrame([['tom', 10], ['nick', 15], ['james', 14]],
                   index=LETTERS, columns=['col', 'col2'])

     col  col2
a    tom    10
b   nick    15
c  james    14
I'm trying to update df1 with df2 so that if a df1 column matches the index of df2, all rows of that column are updated with col2:

    a   b   c
1  10  15  14
2  10  15  14
3  10  15  14
4  10  15  14
5  10  15  14

I've tried df1.update(df2['col2']), but df1 does not update. I've also tried df1.apply(lambda x: df2['col2'].loc[x]), but I'm getting the following error:

KeyError: "None of [Float64Index([nan, nan, nan, nan, nan], dtype='float64')] are in the [index]"
Thank you!
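One way the desired fill can be written (a sketch of an assumed approach, reusing the frames from the question) is a simple column loop, relying on every df1 column label appearing in df2's index:

```python
import pandas as pd

NUMS = ['1', '2', '3', '4', '5']
LETTERS = ['a', 'b', 'c']
df1 = pd.DataFrame(index=NUMS, columns=LETTERS)
df2 = pd.DataFrame([['tom', 10], ['nick', 15], ['james', 14]],
                   index=LETTERS, columns=['col', 'col2'])

for col in df1.columns:              # each df1 column label is a df2 index label
    df1[col] = df2.loc[col, 'col2']  # broadcast the scalar down the column
```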
-
How to delete certain number from a list in list using the index in python?
I have a list of lists, and I am trying to delete the third number of each sublist, but every time I get an error:

TypeError: list indices must be integers or slices, not list

a = [[0.0, 0.0, 0.0],
     [0.19, 0.36, 0.0],
     [0.24, 0.42, 0.0],
     [0.16, 0.08, 0.0],
     [0.05, -0.57, 0.0]]
Desired result:

a_updated = [[0.0, 0.0],
             [0.19, 0.36],
             [0.24, 0.42],
             [0.16, 0.08],
             [0.05, -0.57]]
In the second part of my code, I want to merge the sublists according to the dictionary shown below. For example, the first entry,

1: [1, 2]

means merging the 1st and 2nd sublists, i.e. [0, 0, 0.19, 0.36]. I guess this part of my code is right!

dict_a = {1: [1, 2], 2: [2, 4], 3: [3, 5], 4: [4, 5]}
My attempt:

dict_a = {1: [1, 2], 2: [2, 4], 3: [3, 5], 4: [4, 5]}
a = [[0.0, 0.0], [0.19, 0.36], [0.24, 0.42], [0.16, 0.08], [0.05, -0.57]]

# first part
for i in a:
    for j in a[i]:
        del j[2]
    print(j)

# second part
a_list = []
list_of_index = []
for i in dict_a:
    index = []
    a_list.append(index)
    for j in dict_a_updated[i]:
        print(j - 1)
        index.extend(a_updated[j - 1])
    print('index', index)
Error output:

File "D:\python programming\random python files\4 may axial dis.py", line 18, in <module>
    for j in X[i]:
TypeError: list indices must be integers or slices, not list
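For what it's worth, the TypeError comes from `for i in a`, which yields the sublists themselves, so `a[i]` indexes the list with a list. A minimal sketch of the first part, iterating over the sublists directly:

```python
# `for i in a` yields the inner lists, so a[i] indexes a list with a list.
# Iterating over the sublists directly lets us delete each third element.
a = [[0.0, 0.0, 0.0],
     [0.19, 0.36, 0.0],
     [0.24, 0.42, 0.0],
     [0.16, 0.08, 0.0],
     [0.05, -0.57, 0.0]]

for sub in a:       # each `sub` is one inner list
    del sub[2]      # drop its third element in place

# a is now [[0.0, 0.0], [0.19, 0.36], [0.24, 0.42], [0.16, 0.08], [0.05, -0.57]]
```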
-
C# MariaDB Update Function working in DataStore but not updating in .NET Application
My initial issue is that my database does not update when I call this specific code. I am not sure whether the problem is in the C# itself or in the update query I am calling.
string _connectionString = "validConnectionstring";

using (MySqlConnection _mySqlConnection = new MySqlConnection(_connectionString))
{
    _mySqlConnection.Open();
    using (MySqlCommand command = new MySqlCommand("UpdateProfileStatus", _mySqlConnection))
    {
        command.Transaction = _mySqlConnection.BeginTransaction();
        command.CommandTimeout = TimeSpan.FromSeconds(60).Seconds;
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.AddWithValue("@_username", "test");
        command.Parameters.AddWithValue("@_status", true);
        command.ExecuteNonQuery();
    }
    _mySqlConnection.Close();
}
I get no updates in my database, but my console logs that the query executed and returned a value of 1; still, no update actually happens in the DB. Is there something in my code that explains why it is failing?
Here is the stored procedure that I have for the update command:

CREATE PROCEDURE UpdateProfileStatus(_username VARCHAR(25), _status BOOL)
UPDATE Profile
SET status = _status
WHERE username = _username;
I know the stored procedure works, but I am not sure why my .NET application is not responding to the procedure call. Is it something to do with my implementation of the parameters, or is it the procedure itself?
-
Fastest Way To Execute a Query From 15,000,000+ rows
My database is around 15-20 billion rows. I am planning to build a breach-checking service (similar to LeakCheck). After loading a small part of the data into a MariaDB test database, I noticed it takes around 4 seconds to run a query against a small portion (18M rows) of the total 15-20B. The table is formatted as follows:

breach_id  email  password  username  ip  phone_number
Everything is in one table. If anyone can help me get the query time under 1 s, I would be very appreciative.
VPS specs:
CPU: AMD EPYC 7313 16-Core Processor
RAM: 62G Mem, 1.5G Swap
Storage: 5TB
I have no issue with changing the database I use. The service will be non-profit and aims to help people.
Response of

show create table data_info;
CREATE TABLE `data_info` (
  `database_id` int(11) DEFAULT NULL,
  `email` varchar(255) DEFAULT NULL,
  `password` varchar(255) DEFAULT NULL,
  `username` varchar(255) DEFAULT NULL,
  `ip` varchar(255) DEFAULT NULL,
  `phone_number` varchar(255) DEFAULT NULL,
  FULLTEXT KEY `email` (`email`,`password`,`username`,`ip`,`phone_number`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4

1 row in set (0.000 sec)
MariaDB [hackcheck]> explain select * from data_info where email = "fbi";
+------+-------------+-----------+------+---------------+------+---------+------+----------+-------------+
| id   | select_type | table     | type | possible_keys | key  | key_len | ref  | rows     | Extra       |
+------+-------------+-----------+------+---------------+------+---------+------+----------+-------------+
|    1 | SIMPLE      | data_info | ALL  | email         | NULL | NULL    | NULL | 18233866 | Using where |
+------+-------------+-----------+------+---------------+------+---------+------+----------+-------------+
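The EXPLAIN above shows type ALL, i.e. a full table scan; a FULLTEXT key does not serve a plain equality predicate. A small language-neutral sketch (illustrative only, with made-up data) of why an exact-match index changes the picture:

```python
# Sketch of why the query scans 18M rows: an equality lookup without a
# usable index must touch every row, while an exact-match index (the dict
# below plays that role) probes directly, independent of table size.
rows = [{"email": "user%d@example.com" % i} for i in range(100000)]

# full scan: comparable to type=ALL in the EXPLAIN output
scan_hits = [r for r in rows if r["email"] == "user4242@example.com"]

# index probe: one lookup
by_email = {r["email"]: r for r in rows}
hit = by_email["user4242@example.com"]
```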
-
Sequence number, incorrect number
In a Spring Boot application, I have this entity:

@Data
@NoArgsConstructor
@AllArgsConstructor
@Entity
public class User {

    @Id
    @GeneratedValue(generator = "user_id_seq")
    @SequenceGenerator(name = "user_id_seq", sequenceName = "user_id_seq", allocationSize = 1)
    Long id;
    ...
}
In MariaDB, when I check the sequence:

CREATE OR REPLACE SEQUENCE `user_id_seq`
    start with 1
    minvalue 1
    maxvalue 9223372036854775806
    increment by 1
    cache 1000
    nocycle
ENGINE=InnoDB
In the DB, I have only 5 users.

select id from `user` u

This query returns:

1002
1004
1005
2007
3001

Why is it not 1, 2, 3, 4, 5?
Is it because of the cache 1000?
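A hypothetical sketch of how a sequence with cache 1000 can produce such gaps (the class below is an illustrative model, not MariaDB internals): ids are reserved in blocks, and reserved-but-unused ids are skipped when the cache is discarded, e.g. on a restart:

```python
class CachedSequence:
    """Illustrative model of a sequence with a cache: ids are reserved in
    blocks, and unused reserved ids are lost when the block is discarded."""

    def __init__(self, cache=1000):
        self.cache = cache
        self.reserved_upto = 0   # highest id in the current reserved block
        self.next_id = 1

    def nextval(self):
        if self.next_id > self.reserved_upto:
            # reserve the next block of `cache` ids
            self.reserved_upto = self.next_id + self.cache - 1
        val = self.next_id
        self.next_id += 1
        return val

    def restart(self):
        # simulate a server restart: the rest of the reserved block is skipped
        self.next_id = self.reserved_upto + 1

seq = CachedSequence(cache=1000)
first_two = [seq.nextval(), seq.nextval()]   # 1, 2
seq.restart()
after_restart = seq.nextval()                # jumps past the discarded block
```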
-
MariaDB docker not starting due to Innodb error
I have a local docker environment (Mac) for my Laravel set-up. The mariadb container in docker-compose.yml is defined as:

image: mariadb:10.6
container_name: ct-mariadb
restart: unless-stopped
ports:
    - "3306:3306"
volumes:
    - ./docker/storage/mysql:/var/lib/mysql
environment:
    MYSQL_DATABASE: ....
    MYSQL_USER: ....
    MYSQL_PASSWORD: ....
    MYSQL_ROOT_PASSWORD: ....
    SERVICE_TAGS: dev
    SERVICE_NAME: mysql
networks:
    - laravel
The set-up used to work fine, but after testing some manual DB changes it got corrupted. When (re-)starting the containers via

docker-compose up -d --build

the mariadb container fails to start. The following can be found in the container logs:

2022-05-05 23:13:17+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.6.7+maria~focal started.
2022-05-05 23:13:18+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2022-05-05 23:13:18+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.6.7+maria~focal started.
2022-05-05 23:13:18+00:00 [Note] [Entrypoint]: MariaDB upgrade not required
2022-05-05 23:13:19+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.6.7+maria~focal started.
2022-05-05 23:13:19+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2022-05-05 23:13:19+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.6.7+maria~focal started.
2022-05-05 23:13:19+00:00 [Note] [Entrypoint]: MariaDB upgrade not required
2022-05-05 23:13:20+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.6.7+maria~focal started.
2022-05-05 23:13:20+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2022-05-05 23:13:20+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.6.7+maria~focal started.
2022-05-05 23:13:21+00:00 [Note] [Entrypoint]: MariaDB upgrade not required
2022-05-05 23:13:18 0 [Note] mariadbd (server 10.6.7-MariaDB-1:10.6.7+maria~focal) starting as process 1 ...
2022-05-05 23:13:18 0 [Warning] Setting lower_case_table_names=2 because file system for /var/lib/mysql/ is case insensitive
2022-05-05 23:13:18 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2022-05-05 23:13:18 0 [Note] InnoDB: Number of pools: 1
2022-05-05 23:13:18 0 [Note] InnoDB: Using ARMv8 crc32 + pmull instructions
2022-05-05 23:13:18 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
2022-05-05 23:13:18 0 [Note] InnoDB: Using Linux native AIO
2022-05-05 23:13:18 0 [Note] InnoDB: Initializing buffer pool, total size = 134217728, chunk size = 134217728
2022-05-05 23:13:18 0 [Note] InnoDB: Completed initialization of buffer pool
2022-05-05 23:13:18 0 [Note] InnoDB: Starting crash recovery from checkpoint LSN=507568654,515471506
2022-05-05 23:13:18 0 [ERROR] InnoDB: Missing FILE_CREATE, FILE_DELETE or FILE_MODIFY before FILE_CHECKPOINT for tablespace 230
2022-05-05 23:13:18 0 [ERROR] InnoDB: Plugin initialization aborted with error Data structure corruption
2022-05-05 23:13:18 0 [Note] InnoDB: Starting shutdown...
2022-05-05 23:13:19 0 [ERROR] Plugin 'InnoDB' init function returned error.
2022-05-05 23:13:19 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2022-05-05 23:13:19 0 [Note] Plugin 'FEEDBACK' is disabled.
2022-05-05 23:13:19 0 [ERROR] Unknown/unsupported storage engine: InnoDB
2022-05-05 23:13:19 0 [ERROR] Aborting
I've tried to delete all containers, images and volumes several times via the Docker Desktop UI and with

docker rm -f $(docker ps -a -q)
docker rmi -f $(docker images -q)

but after starting up the containers I get the exact same error again. I fail to understand why/how this error can still come back after completely deleting all containers, images and volumes.
Any help is much appreciated.
-
Is there a limit to the number of transactions that can be queued up for a row lock in MySQL/InnoDB?
I'm working on an application that sees thousands of basically simultaneous login attempts. These login attempts depend on ADFS metadata that has to be refreshed from time to time for different groups of users. While I am building an automatic refresher that kicks in at T-12 hours before the refresh is required, I also want to handle the situation where the automatic refresh fails to occur. What I want is for the first login attempt that fails due to out-of-date metadata to trigger a refresh, but only the first; otherwise we'll get thousands of unnecessary requests to an ADFS server.
Since the ADFS metadata is stored in a MySQL table anyway, I thought of using the InnoDB locking mechanism to handle this. If a login request fails due to out-of-date metadata, it will request a lock on the row holding the relevant metadata. If the lock is granted, it will check the refresh date on the metadata, and if that is out-of-date, it will trigger a refresh of the metadata and then write the new metadata to that row.
All subsequent logins that fail due to old metadata will also request their own locks, which will not be granted because the first request was granted a lock. As soon as the first request finishes updating the metadata, it will release the lock, and the next lock will be granted. That request will check the refresh date, see that it does not need to be refreshed, and continue as normal with the new metadata.
My question is, can MySQL/InnoDB handle, say, 10,000 transactions waiting for a lock on a single row? If there is a limit, can the limit be changed?
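The scheme described, first waiter refreshes, later waiters re-check and skip, is essentially double-checked locking. A small in-process sketch (Python threads and a mutex standing in for InnoDB row locks; an analogy, not the actual engine behavior):

```python
import threading

refresh_count = 0
metadata = {"fresh": False}
row_lock = threading.Lock()      # stands in for the InnoDB row lock

def login():
    global refresh_count
    if not metadata["fresh"]:            # stale metadata detected
        with row_lock:                   # queue up, like waiting on the row lock
            if not metadata["fresh"]:    # re-check after acquiring the lock
                refresh_count += 1       # only the first waiter refreshes
                metadata["fresh"] = True

threads = [threading.Thread(target=login) for _ in range(200)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# refresh_count is 1: later waiters saw fresh metadata and skipped the refresh
```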