I just started using FSUtil to read files and have two questions:
As described in the help, the "Read" method should return a Blob in my case and read the remaining (complete) file (1021 bytes here), but the blob's .size reports 0. When I try "tf.Read(500)", 500 is printed. Is the description wrong, or am I misinterpreting something? I want to get the blob with the complete file in it, and I thought this must be the shortest way.
When testing the "OpenFile" method with a non-existent file, no exception is thrown. The exception only happens after the "Read" method. Is that by design? ... and the caught error message is "(0x80004005)"?
Looks like a bug; we'll fix that. As a workaround you can use tf.Read(tf.Size());
It doesn't throw exceptions; you can check tf.error after opening the file. If it's not 0, it means the file failed to open (the code you get back is a Win32 error code).
Blobs are really meant for binary data, so there isn't an easy way to turn one into a string. You'd have to loop through it a byte at a time and build up the string that way.
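A minimal sketch of that byte-at-a-time approach, as a plain JavaScript/JScript function (an ordinary array of byte values stands in for the Blob here; in a real script the loop would index the Blob the same way):

```javascript
// Build a string from raw byte values, one at a time.
// `bytes` stands in for the Blob contents.
function bytesToString(bytes) {
    var s = "";
    for (var i = 0; i < bytes.length; i++) {
        // Only safe for single-byte text (ASCII/ANSI); multi-byte
        // encodings such as UTF-8 need a proper decoder instead.
        s += String.fromCharCode(bytes[i]);
    }
    return s;
}
```

Note this only works for single-byte encodings, which is exactly why a decoder is needed for UTF-8 files.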
If you want to read text files then Windows has helper objects for that built-in which may be a better fit, depending on what you're trying to do. The whole reason we added File and Blob was to support binary data, which Windows doesn't have built-in support for.
I want to read UTF-8 files (incl. BOM), e.g. a .m3u8 file, which isn't supported by JScript's OpenTextFile (only Unicode (16-bit), ASCII, or the system default, but not UTF-8; I tried it, but it doesn't work).
So I thought I'd use the DOpus objects: read the complete .m3u8 into a Blob and then use StringTools.Decode with "utf-8" to get a printable string.
Yes, that should work, since StringTools takes a Blob in and gives a string out. As long as the data actually is UTF-8 encoded, it should work fine. You just can't print the Blob data directly.
OK, after reading the complete file into the blob, I decode it with "utf-8" and it works 99%:
var st = DOpus.Create().StringTools().Decode(tfb, "utf-8");
DOpus.Output("Content: [" + st + "].");
The one little problem is the BOM header. As you can see, the binary bytes change the font (from the first line on!) and leave two "spaces" after the "[" on the first line.
I tried "auto" instead of "utf-8", but that doesn't work; it results in "Invalid procedure call or argument".
It would be great to support something like Decode(tfb, "utf-8", true), where true means auto-detection: if a BOM for UTF-8, UTF-16LE, etc. exists, it is used for decoding, and if no valid BOM exists, the second parameter is the fallback codepage.
This would save extra lines of code in each script ... and I have to read ID3 tags from WAVs later, and a BOM could be stored for each field.
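A sketch of what such auto-detection could do, written as a plain JavaScript function over the first bytes of the file (the function name and the fallback parameter are my own invention for illustration, not part of the DOpus API):

```javascript
// Guess a codepage from a byte-order mark; fall back if none is found.
// Purely illustrative: StringTools has no such helper built in.
function detectEncoding(bytes, fallback) {
    if (bytes.length >= 3 &&
        bytes[0] === 0xEF && bytes[1] === 0xBB && bytes[2] === 0xBF) {
        return "utf-8";
    }
    if (bytes.length >= 2 && bytes[0] === 0xFF && bytes[1] === 0xFE) {
        return "utf-16le";
    }
    if (bytes.length >= 2 && bytes[0] === 0xFE && bytes[1] === 0xFF) {
        return "utf-16be";
    }
    return fallback; // no valid BOM: use the caller's codepage
}
```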
In the meantime, however, I will just skip the 3 BOM bytes at the beginning.
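Skipping the BOM before decoding can be sketched like this (again plain JavaScript with an array standing in for the Blob; in the real script the same three-byte check would run against the start of the Blob):

```javascript
// Drop a leading UTF-8 BOM (EF BB BF) from a byte array, if present.
function stripUtf8Bom(bytes) {
    if (bytes.length >= 3 &&
        bytes[0] === 0xEF && bytes[1] === 0xBB && bytes[2] === 0xBF) {
        return bytes.slice(3);
    }
    return bytes;
}
```

Checking for the BOM instead of skipping three bytes unconditionally keeps BOM-less UTF-8 files intact.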