The "Opus Manual" menu at the top right of the forum has a link to the main three scripting sections, if that's what you need.
The script could run the image conversion command at X quality, keeping the original file unchanged, then check the size of the result and re-run it at a lower quality (and/or resolution), until the size is under the required limit.
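The suggested loop can be sketched in Python using Pillow (the thread is about Opus scripting, so this is only an illustration of the logic, not Opus code; the function name `convert_to_limit` and the quality steps are my own assumptions). Each attempt re-encodes from the unchanged original, never from a previous lossy result:

```python
# Sketch: re-encode the ORIGINAL image at decreasing JPEG quality until the
# output fits under a byte limit. Requires Pillow. The quality ladder and
# function name are illustrative assumptions, not the thread's actual script.
import io
from PIL import Image

def convert_to_limit(src_path, dest_path, max_bytes,
                     qualities=(90, 80, 70, 60, 50)):
    """Return the quality that fit, or None if none did."""
    img = Image.open(src_path)  # original stays untouched on disk
    for q in qualities:
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)  # encode in memory first
        if buf.tell() <= max_bytes:
            with open(dest_path, "wb") as f:
                f.write(buf.getvalue())  # write only the one that fits
            return q
    return None
```

Because every pass starts from the same source image, stepping the quality down does not compound generation loss the way repeatedly re-compressing an already-lossy file would.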
Using a loop seems like it would be awkward in a batch environment with multiple photos, and it could be time-consuming. Also, you wouldn't want to step the size down from an already-lossy result, or by the time you got to your desired size the picture would have lost a ton of detail and would be pixelated. So you would have to keep track of the results and keep re-trying from a master copy.
I would think that mathematically it would be possible to ascertain a size estimate given the dimensions, compression settings, and DPI. I have no idea how to pull that off, however...
Is there a way to ascertain the projected size before converting it? Perhaps with a function?
There's no other way you can do it. You cannot predict how large a JPEG will be without making it. JPEG compression is really fast so it would not take long either.
Re-read what I suggested. You keep the original and always use that as the source image.
Not that I know of, at least. Compression isn't predictable like that.
Is it possible for the conversion function to perform the conversion without actually writing anything to disk, and instead just count the bytes it would have written and return that count to the script?
This would cut disk I/O down to just reads and avoid having to deal with intermediate files.
I realize it's a "feature change" request here. I just don't know if I'm going to be able to code something like this myself.
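For what it's worth, the idea of measuring the projected size without touching the disk is straightforward in environments that can encode into memory. A minimal Python/Pillow sketch (the helper name `projected_jpeg_size` is hypothetical, and this does not imply Opus's own conversion function can do this):

```python
# Sketch: encode into an in-memory buffer and read its length, so nothing is
# written to disk unless the size turns out to be acceptable. Assumes Pillow.
import io
from PIL import Image

def projected_jpeg_size(img, quality):
    """Return how many bytes a JPEG save at this quality would produce."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.tell()  # bytes that would have been written
```

The image still has to be fully compressed to get the number, as noted above; the only saving is the disk write, not the encoding work.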