Description
When using pdo_dblib, running two queries at the same time on the same connection results in corrupt statements.
The keys are correct, but the values come from the last query.
It's as if both statements are somehow sharing the same cursor under the hood.
The following code:
<?php
$user = "REDACTED";
$pass = "REDACTED";
$host = "REDACTED";
$db   = "REDACTED";
$con = new \PDO("dblib:host=$host;dbname=$db;version=7.3;charset=UTF-8", $user, $pass, array(
    PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC
));
$sta = $con->query("SELECT *
                    FROM (
                        VALUES ('A0'),
                               ('A1'),
                               ('A2')
                    ) AS MyTable(A)");
$stb = $con->query("SELECT *
                    FROM (
                        VALUES ('B0'),
                               ('B1'),
                               ('B2')
                    ) AS MyTable(B)");
var_export($sta->fetch()); echo "\n";
var_export($stb->fetch()); echo "\n";
var_export($sta->fetch()); echo "\n";
var_export($stb->fetch()); echo "\n";
Resulted in this output:
array (
'A' => 'B0',
)
array (
'B' => 'B1',
)
array (
'A' => 'B2',
)
false
But I expected this output instead:
array (
'A' => 'A0',
)
array (
'B' => 'B0',
)
array (
'A' => 'A1',
)
array (
'B' => 'B1',
)
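Until this is fixed, one userland workaround is to make sure only one statement is ever live on the connection: drain the first statement with fetchAll() before issuing the second query. A minimal sketch of that pattern follows; it assumes the caller can afford to buffer the whole first result set, and it uses SQLite only so the snippet runs without a SQL Server (the bug itself is specific to pdo_dblib):

```php
<?php
// Workaround sketch: buffer the first result set before starting the
// second query, so the two statements never compete for the
// connection's single cursor.
$con = new PDO('sqlite::memory:', null, null, [
    PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
]);

$sta = $con->query("SELECT 'A0' AS A UNION ALL SELECT 'A1' UNION ALL SELECT 'A2'");
// Drain statement A completely, freeing the connection for the next query.
$rowsA = $sta->fetchAll();

$stb = $con->query("SELECT 'B0' AS B UNION ALL SELECT 'B1' UNION ALL SELECT 'B2'");
$rowsB = $stb->fetchAll();

// The rows can now be interleaved freely from the in-memory arrays.
var_export($rowsA[0]); echo "\n";
var_export($rowsB[0]); echo "\n";
```

The tradeoff is memory: fetchAll() materializes every remaining row up front.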
PHP Version
PHP 8.2.7
Operating System
Ubuntu 23.04
nielsdos commented on Jun 29, 2023
Reproducible on 8.1+.
I think you're right that it fetches from the connection; the code that fetches rows is
dbnextrow(H->link)
which doesn't take a cursor into account. I don't know FreeTDS's API well enough, but at a quick glance I don't see a next-row function that works on a cursor.
harrison3000 commented on Jul 6, 2023
After a bit of research: FreeTDS doesn't support multiple simultaneous queries on the same connection, and with good reason.
Super old PHP versions (I'm currently migrating a system to a newer PHP version, which is how I discovered the bug) used to fetch everything at once at query time. That behavior was more in line with how the TDS protocol works, but it used a lot of memory for larger result sets.
My suggestion is a somewhat hybrid approach: keep lazily loading rows, but as soon as another simultaneous query is made, fetch all the remaining rows of the old statement and store them in memory to free the connection for the newer query. That way we get the best of both worlds: low memory usage for serialized queries and the possibility of parallel queries (at the cost of higher memory usage).
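For what it's worth, the hybrid approach can be prototyped in userland before touching the driver. The sketch below uses hypothetical wrapper classes (DrainingConnection and BufferedStatement, not part of PDO) that stream rows lazily but drain the active statement into memory the moment a new query arrives; the demo uses SQLite only so it runs without a SQL Server:

```php
<?php
// Userland sketch of the hybrid approach: rows stream lazily, but when a
// second query is issued while a statement is still open, the old
// statement's remaining rows are pulled into memory first, freeing the
// connection's single cursor for the new query.
class BufferedStatement
{
    /** @var array<int, array> Rows drained into memory. */
    private array $buffer = [];
    private bool $drained = false;

    public function __construct(private PDOStatement $stmt) {}

    public function fetch(): array|false
    {
        if ($this->buffer !== []) {
            return array_shift($this->buffer);      // serve buffered rows first
        }
        return $this->drained ? false : $this->stmt->fetch(); // else stream lazily
    }

    public function drain(): void
    {
        if (!$this->drained) {
            // Pull all remaining rows into memory in one go.
            $this->buffer = array_merge($this->buffer, $this->stmt->fetchAll());
            $this->drained = true;
        }
    }
}

class DrainingConnection
{
    private ?BufferedStatement $active = null;

    public function __construct(private PDO $pdo) {}

    public function query(string $sql): BufferedStatement
    {
        $this->active?->drain(); // free the cursor before reusing the connection
        return $this->active = new BufferedStatement($this->pdo->query($sql));
    }
}

// Demo (SQLite stands in for dblib here):
$con = new DrainingConnection(new PDO('sqlite::memory:', null, null, [
    PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
]));
$sta = $con->query("SELECT 'A0' AS A UNION ALL SELECT 'A1' UNION ALL SELECT 'A2'");
$stb = $con->query("SELECT 'B0' AS B UNION ALL SELECT 'B1' UNION ALL SELECT 'B2'"); // drains $sta
var_export($sta->fetch()); echo "\n"; // buffered row from the first query
var_export($stb->fetch()); echo "\n"; // live row from the second query
```

Serialized queries still stream row by row with low memory usage; only an interleaved query pays the buffering cost, which matches the tradeoff described above.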