getattrlistbulk lists same files over and over on macOS 15 Sequoia

A customer of mine reported that since updating to macOS 15 they can no longer use my app, which performs a deep scan of selected folders by recursively calling getattrlistbulk. The problem is that the app apparently keeps scanning forever, with the number of scanned files growing without bound.

This happens for some folders on an SMB volume.

The customer confirmed that they can reproduce the issue with a small sample app that I attach below. At first I created a sample app that only scans the contents of the selected folder without recursing into its subfolders, but that version didn't reproduce the issue, so the problem seems to be related to recursively calling getattrlistbulk.

The output of the sample app on the customer's Mac is similar to this:

start scan /Volumes/shares/Backup/Documents level 0 fileManagerCount 2847
continue scan /Volumes/shares/Backup/Documents new items 8, sum 8, errno 34
/Volumes/shares/Backup/Documents/A.doc
/Volumes/shares/Backup/Documents/B.doc
...
continue scan /Volumes/shares/Backup/Documents new items 7, sum 1903, errno 0
/Volumes/shares/Backup/Documents/FKV.pdf
/Volumes/shares/Backup/Documents/KFW.doc
/Volumes/shares/Backup/Documents/A.doc
/Volumes/shares/Backup/Documents/B.doc
...

which shows that counting the number of files in the root folder by using

try FileManager.default.contentsOfDirectory(atPath: path).count

returns 2847, while getattrlistbulk lists about 1903 files and then starts listing the same files again from the beginning, not across separate scans of the folder, but within a single enumeration.

What could be causing this issue?

(The website won't let me attach .swift files, so I include the source code of the sample app as a text attachment.)
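
For context, the core of the scan looks roughly like this (a simplified sketch, not the actual attachment: error handling, the file counting, and the check of which attributes were actually returned are omitted):

import Darwin

// Simplified recursive scan: enumerate one directory with getattrlistbulk and
// descend into each subdirectory as soon as it is encountered, i.e. while the
// parent directory's enumeration is still in progress.
func scan(path: String, level: Int = 0) {
    print("start scan \(path) level \(level)")

    let dirFD = open(path, O_RDONLY)
    guard dirFD >= 0 else { return }
    defer { close(dirFD) }

    // Request the returned-attributes set, the entry name and the object type.
    var attrList = attrlist()
    attrList.bitmapcount = UInt16(ATTR_BIT_MAP_COUNT)
    attrList.commonattr = attrgroup_t(ATTR_CMN_RETURNED_ATTRS)
        | attrgroup_t(ATTR_CMN_NAME)
        | attrgroup_t(ATTR_CMN_OBJTYPE)

    let bufferSize = 256 * 1024
    let buffer = UnsafeMutableRawPointer.allocate(byteCount: bufferSize,
                                                  alignment: MemoryLayout<UInt32>.alignment)
    defer { buffer.deallocate() }

    while true {
        let count = getattrlistbulk(dirFD, &attrList, buffer, bufferSize, 0)
        if count <= 0 { break }            // 0 = end of directory, -1 = error

        var entry = buffer
        for _ in 0..<count {
            let entryLength = entry.load(as: UInt32.self)
            var field = entry + MemoryLayout<UInt32>.size

            // ATTR_CMN_RETURNED_ATTRS is packed first; the sketch assumes the
            // other two attributes are always present and skips checking it.
            field += MemoryLayout<attribute_set_t>.size

            // ATTR_CMN_NAME: an attrreference_t whose offset is relative to itself.
            let nameRef = field.load(as: attrreference_t.self)
            let name = String(cString: (field + Int(nameRef.attr_dataoffset))
                .assumingMemoryBound(to: CChar.self))
            field += MemoryLayout<attrreference_t>.size

            // ATTR_CMN_OBJTYPE: values come from enum vtype in <sys/vnode.h>; VDIR is 2.
            let objType = field.load(as: fsobj_type_t.self)

            let childPath = path + "/" + name
            print(childPath)
            if objType == 2 /* VDIR */ {
                scan(path: childPath, level: level + 1)   // recurse while the parent enumeration is open
            }

            entry += Int(entryLength)
        }
    }
}

The important point is that the recursive call happens in the middle of the parent directory's getattrlistbulk loop, which is the pattern the issue seems to be tied to.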

Answered by DTS Engineer in 814122022

Answer One, a possible workaround:

While looking through the log again today, I think I actually found the point where the problem occurs, which is this log sequence:

2024-10-27 19:48:02.915-0400 kernel smbfs_enum_dir: Resuming enumeration for <Documents>
2024-10-27 19:48:02.915-0400 kernel smbfs_find_cookie: Key 5, offset 1130, nodeID 0x400000007bd6c name <Front Door before and after.bmp> for <Documents>
2024-10-27 19:48:02.915-0400 kernel smbfs_enum_dir: offset 1130 d_offset 0 d_main_cache.offset 0 for <Documents>
2024-10-27 19:48:02.915-0400 kernel smbfs_fetch_new_entries: fetch offset 1130 d_offset 0 cachep->offset 0 for <Documents>
2024-10-27 19:48:02.915-0400 kernel smbfs_fetch_new_entries: Main cache needs to be refilled <Documents>
2024-10-27 19:48:02.915-0400 kernel smbfs_fetch_new_entries: Dir has not been modified recently <Documents>
2024-10-27 19:48:02.915-0400 kernel smbfs_fetch_new_entries: Restart enum offset 1130 d_offset 0 for <Documents>

In other words, the failure here is occurring when you "return" to iterating the previous directory. That is something you could avoid/mitigate by removing/modifying the recursion "cycle" of your directory walk. Basically, what you'd do is this:

  1. iterate the directory with getattrlistbulk
  2. if(file)-> process normally
  3. if(directory)-> cache/store entry
  4. finish iteration of current directory
  5. for each directory in #3, return to #1

In concrete terms, if you have this hierarchy:

dir1
	file1
	dir2
		file2
	file3
	dir3
		file4
		dir4
			fileC
		file5
	file6

Currently, the order you process files is exactly the same as the order above:

iterate dir1
process file1
iterate dir2
process dir2/file2
process file3
iterate dir3
process dir3/file4
iterate dir3/dir4
process dir3/dir4/fileC
process dir3/file5
process file6

With the new approach, the same hierarchy would process as:

iterate dir1
process file1
process file3
process file6
iterate dir2
process dir2/file2
iterate dir3
process dir3/file4
process dir3/file5
iterate dir3/dir4
process dir3/dir4/fileC
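
In rough Swift terms (a sketch of the idea, not drop-in code), the restructured walk would look something like this, where listDirectory(at:) stands in for your existing single-directory getattrlistbulk loop with the recursive call removed:

// Hypothetical helper, factored out of the scan in the question: one complete
// getattrlistbulk enumeration of a single directory, with no recursion.
// It returns the names of plain files and of subdirectories.
func listDirectory(at path: String) -> (files: [String], directories: [String]) {
    // ... the same getattrlistbulk loop as before, but collecting names into
    // the two arrays instead of recursing ...
    return ([], [])
}

// Queue-based walk (steps 1-5 above): each directory is enumerated from start
// to finish before any of its subdirectories is touched, so no getattrlistbulk
// enumeration is ever suspended and resumed later.
func flattenedScan(startingAt root: String) {
    var pending = [root]                                   // directories still to enumerate
    while !pending.isEmpty {
        let dir = pending.removeFirst()
        let (files, directories) = listDirectory(at: dir)  // steps 1, 2 and 4
        for file in files {
            print("\(dir)/\(file)")                        // step 2: process files normally
        }
        // Steps 3 and 5: remember subdirectories and come back to them only
        // after the current directory's enumeration is complete.
        pending.append(contentsOf: directories.map { "\(dir)/\($0)" })
    }
}

The extra bookkeeping here is just the list of directories that are still waiting to be enumerated, so the additional memory is proportional to the number of directories, not the number of files.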

This does add some additional bookkeeping and memory overhead; however, I do think it's a "safer" approach overall. IF the issue is the SMB server (see answer 2 for the other possibility), then the issue is almost certainly caused by the complexity of tracking nested iteration. In other words, the problem isn't "iterating 10,000 files", it's RETURNING to that iteration after having iterated lots of other directories. The approach above removes that because, as far as the file system is concerned, you only ever iterate one directory at a time. You can also use this as an easy opportunity to flatten your recursion, so there are some memory benefits as well.

Finally, flattening also helps with the "larger" iteration context. As a simplified example, imagine this particular bug is that the server drops the least recent iteration any time there are 10 or more simultaneous iterations. As far as the server is concerned, 10 apps each iterating 1 directory look exactly the same as a nested iteration 10 levels deep. Flattening the iteration obviously solves the second case, but it probably helps the first one as well: your single iteration never "blocks" (because you never recurse while a getattrlistbulk enumeration is in progress), so your iteration is unlikely to ever be the "oldest". Something may still fail, but it won't be your app.

__
Kevin Elliott
DTS Engineer, CoreOS/Hardware

SO, making sure this is clear, the performance "trick" here is to use your final scan storage format as your "intermediate" format instead of doing the entire scan in memory. If done properly, this means:

I'm not sure I understand. Do you mean that you would write the scan results to disk while the scan is in progress? Otherwise I don't get the "entire scan in memory" part.

You're probably right that resuming a scan is easier in the iterative case. I'm not supporting that case (yet), so for now I'll stick to recursion.
