I don't see a problem with your approach:
- A hash cannot be reversed
- In your approach it is not sufficient to find just any input that produces the same hash result; the attacker would need the original hash input, i.e. the real read-write slug
- The shorter your read-only slug is, the easier it is to find candidate read-write slugs that hash to it, but most of these candidates will be wrong. With a longer read-only slug the candidates are less likely to be wrong, but it is much harder to find any candidate at all. Either way, finding the real read-write slug remains impractical
- The addition of a secret salt per document further complicates the attacker's task (see the sketch right after this list)
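
To make this concrete, here is a minimal Python sketch of how I read your approach; function and parameter names are my own placeholders, not your actual API:

```python
import hashlib

def derive_readonly_slug(readwrite_slug: str, per_document_salt: bytes) -> str:
    """Derive the read-only slug from the secret per-document salt and the read-write slug."""
    # SHA-256 over salt + slug; knowing the read-only slug does not let an
    # attacker reverse this to recover the read-write slug.
    digest = hashlib.sha256(per_document_salt + readwrite_slug.encode("utf-8")).hexdigest()
    # Shorten to the desired URL length; see the note on encoding and length further down.
    return digest[:20]
```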
That assumes, though, that the read-write slug is strong and not somehow predictable, i.e. that the attacker cannot rank or filter the possible candidates by how likely they are. This assumption might be wrong if "the URL slug can be defined by the user" means that users actually make up the (probably weak) slug themselves, rather than the system generating a sufficiently long and random slug for them.
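
If you let the system generate the read-write slug instead, something along these lines (a sketch using Python's standard secrets module; the byte count is just an example) avoids weak user-chosen slugs entirely:

```python
import secrets

def generate_readwrite_slug() -> str:
    """Generate an unpredictable read-write slug instead of letting the user invent one."""
    # 16 random bytes = 128 bits of entropy, encoded as ~22 URL-safe characters.
    return secrets.token_urlsafe(16)
```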
With this assumption I don't think you need a deliberately slow hash for this; a common fast hash like SHA-256 should be sufficient.
The length of the read-only slug is relevant, though, to protect against brute-forcing slugs to gain read-only access. The proposed 16..20 characters should be sufficient, provided they come from a wide enough range of characters. If you just shorten the usual hexadecimal-encoded hash this is only 64..80 bits of security, which might still be enough given that every attempt requires a request against your web server. If you instead encode the hash as base64 and then shorten it to the same length, you get 96..120 bits of security, which is even better without impacting usability.
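
To illustrate the difference per character (the salt and slug values here are placeholders):

```python
import base64
import hashlib

digest = hashlib.sha256(b"per-document-salt" + b"read-write-slug").digest()

# Hex encoding carries 4 bits per character, so 20 characters keep only 80 bits.
hex_slug = digest.hex()[:20]

# Base64url encoding carries 6 bits per character, so 20 characters of the
# same URL length keep 120 bits.
b64_slug = base64.urlsafe_b64encode(digest).decode("ascii").rstrip("=")[:20]
```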