This was originally some company's PPT deck; I couldn't find it anymore.

Found it, damn it.

https://www.slideshare.net/slideshow/view?login=Eweaver&preview=no&slideid=1&title=efficient-pagination-using-mysql

https://stackoverflow.com/questions/4481388/why-does-mysql-higher-limit-offset-slow-the-query-down

MySQL cannot go directly to the 10000th record (or the 80000th byte as you're suggesting) because it cannot assume that it's packed/ordered like that (or that it has continuous values in 1 to 10000). Although it might be that way in actuality, MySQL cannot assume that there are no holes/gaps/deleted ids.

So, as bobs noted, MySQL will have to fetch 10000 rows (or traverse through the first 10000 entries of the index on id) before finding the 30 to return.
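A quick hypothetical sketch of that point (the table and ids here are made up): because rows can be deleted, OFFSET counts the surviving rows, not id values, so the server has to walk the rows/index to know where the 10000th one is.

-- hypothetical: ids 1..5 were inserted, then id 3 was deleted; remaining ids are 1, 2, 4, 5
SELECT * FROM large ORDER BY id LIMIT 2, 1;
-- returns the row with id 4 (the third remaining row), not id 3,
-- which is why an offset cannot be translated directly into an id or a byte position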

EDIT: To illustrate my point

Note that although

SELECT * FROM large ORDER BY id LIMIT 10000, 30
would be slow(er),

SELECT * FROM large WHERE id > 10000 ORDER BY id LIMIT 30
would be fast(er), and would return the same results provided that there are no missing ids (i.e. gaps).
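In practice you don't hardcode `id > 10000`; you remember the id of the last row on the previous page and use it as the cursor for the next page. This is the standard keyset ("seek") pagination pattern, and unlike a hardcoded boundary it stays correct even when ids have gaps. A minimal sketch, assuming the same `large` table and a made-up cursor value:

-- page 1
SELECT * FROM large ORDER BY id LIMIT 30;
-- suppose the last row returned had id = 10543; page 2 is then
SELECT * FROM large WHERE id > 10543 ORDER BY id LIMIT 30;
-- fast regardless of depth (index range scan on id) and gap-safe,
-- but it only supports next/previous navigation, not jumping to an arbitrary page number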
