I regularly back up and archive files to tape, and I'd like to keep track of which files end up on which tape. My plan is to redirect the output of tar -tvf /dev/tape0 to a *.txt file and then run that through awk to produce something with formatting. So something like this:
<code>
drwxrwxrwx 138862/138862 0 2020-05-22 06:17 mnt/nas/photo/2020/05/21/charlie/
-rwxrwxrwx 1028/138862 25344783 2020-05-20 23:27 mnt/nas/photo/2020/05/21/charlie/IMG_8515.CR2
-rwxrwxrwx 1028/138862 25194429 2020-05-20 23:27 mnt/nas/photo/2020/05/21/charlie/IMG_8516.CR2
-rwxrwxrwx 1028/138862 25422944 2020-05-20 23:27 mnt/nas/photo/2020/05/21/charlie/IMG_8517.CR2
-rwxrwxrwx 1028/138862 25449415 2020-05-20 23:27 mnt/nas/photo/2020/05/21/charlie/IMG_8518.CR2
-rwxrwxrwx 1028/138862 25411594 2020-05-20 23:27 mnt/nas/photo/2020/05/21/charlie/IMG_8519.CR2
</code>
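For reference, the listing itself comes from something like this (the output file name is just an example):
<code>
# Dump the tape's table of contents to a text file for later parsing
tar -tvf /dev/tape0 > tape0-contents.txt
</code>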
The only problem so far is that the text file produced by listing an entire tape full of small files gets quite large (6 MB and up). I've parsed these tar listings with awk into DokuWiki table syntax, which looks very nice, but sometimes DokuWiki simply fails to load the page. I assume it just takes too much time and/or memory to parse, since there is so much data.
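The awk step is roughly along these lines (a simplified sketch; the input file name and the choice of columns are just illustrative, and paths containing spaces would need extra handling):
<code>
# Turn the tar -tvf listing into DokuWiki table markup:
# $3 = size, $4 = date, $5 = time, $6 = path
awk 'BEGIN { print "^ Size ^ Date ^ Time ^ Path ^" }
     { printf "| %s | %s | %s | %s |\n", $3, $4, $5, $6 }' tape0-contents.txt > tape0-table.txt
</code>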
What are the timeouts/memory limits built into DokuWiki itself? Should I also be looking at my web server configuration (php.ini)?
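In case it matters, these are the php.ini directives I was planning to check first (assuming they are the relevant ones):
<code>
; candidate limits for a page that is slow or heavy to render
memory_limit = 256M
max_execution_time = 60
</code>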
Or are there much more efficient ways of loading (very) large pages? Just plain text with fixed-width tabs, or something like that?
<edit>
I just noticed that code tags seem to render way faster than tables. Perhaps that can help me out.
</edit>
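Following up on that edit: what I'm thinking of trying is the same awk pass, but emitting fixed-width columns that I paste between code tags instead of generating table markup (again just a sketch; the column widths are a guess):
<code>
# Fixed-width columns instead of DokuWiki table syntax; the result gets pasted
# between code tags on the wiki page
awk '{ printf "%12s  %s %s  %s\n", $3, $4, $5, $6 }' tape0-contents.txt > tape0-page.txt
</code>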