Forum: General Help and Support → Plugins
Anyone "reverse engineered" a struct schema, complete with values, from a MySQL database?
rkaa #1
Member since Jun 2019 · 6 posts · Location: Norway
Group memberships: Members
Hi

I have the pleasure of administering a DokuWiki instance in an IT department, containing our "24/7" documentation.
To keep track of our physical servers, some scripts import info from openDCIM and write a DokuWiki page for each server, complete with dataentry, keywords and values for server rooms, rack IDs etc.

Some wget and "datarelated" magic later, the main documentation pages (where initially only the server names are known, through the dataentry) nicely list rows with servers, locations and racks.

Management now wants to see if the wiki can be revamped into a CMS-like thing, doubling as a kind of Service Catalogue, with jobs, products, data flows and customers.. and then some. *And* documentation.

The relevant information is currently spread all over, accessible via APIs, MySQL, or at worst by scp/rsync for some serious post processing.

I guess implementing some or all of their wishes would quickly increase the wiki's page count by 5-15k.

I'm considering moving from "data" to "struct" because of better aggregations and layout possibilities.
But I don't have the resources to populate the required struct fields by hand.
That job needs to be scripted somehow. I'm used to modifying data entries with bash scripts, sed, awk or whatever it takes to knead a few thousand pages in a matter of seconds. But "database injection" is a new cup of tea.

So thus the question: Is "reverse engineering" a struct schema a viable idea?
(Or am I in way over my head here, given that I have to ask at all..)

If it wasn't clear already, the goal is to utilise existing values from, for instance, a MySQL database, to avoid months of manual edits.

Thankful for any ideas or comments!


Kind regards;
Kristin
Oslo, Norway
pop (Moderator) #2
Member since Nov 2016 · 207 posts · Location: near Basel, Switzerland
Group memberships: Global Moderators, Members
You can import CSV files into struct tables, if that helps at all. You only have to be careful not to add rows that are already in the table: the importer does not check for duplicates, but cheerfully imports whatever is in the CSV file.
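Since the importer happily re-imports duplicates, one way to be careful is to filter the CSV against the rows already in the table before importing. A minimal Python sketch (the field names, sample rows and the `import.csv` filename are made up for illustration):

```python
import csv

def filter_new_rows(existing_rows, candidate_rows, key_fields):
    """Return only the candidate rows whose key fields are not already present."""
    seen = {tuple(row[f] for f in key_fields) for row in existing_rows}
    return [row for row in candidate_rows
            if tuple(row[f] for f in key_fields) not in seen]

# Rows already in the struct table (e.g. from a CSV export of it)
existing = [{"server": "web01", "rack": "R12"}]
# Rows freshly generated from the external data source
incoming = [{"server": "web01", "rack": "R12"},
            {"server": "db02", "rack": "R07"}]

fresh = filter_new_rows(existing, incoming, key_fields=["server"])

# Write only the fresh rows into the CSV that gets imported
with open("import.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["server", "rack"])
    writer.writeheader()
    writer.writerows(fresh)
```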

However, the import must be run manually by the admin of the wiki. I don't think that you can do a scheduled import or an import triggered by a user request.

If you don't want one wiki page for each table row, define your struct table as a "lookup".
andi (Administrator) #3
User title: splitbrain
Member since May 2006 · 3499 posts · Location: Berlin, Germany
Group memberships: Administrators, Members
The struct plugin also exposes parts of the data management tasks via the XML-RPC API, which could be used to script regular updates.
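By way of illustration, such a scripted update could look roughly like this from Python. The wiki URL, credentials and page ID are hypothetical, and the payload shape assumed for plugin.struct.saveData should be verified against the struct plugin's remote API documentation:

```python
import xmlrpc.client

def struct_payload(schema, fields):
    """Build a {schema_name: {field: value}} structure -- the shape
    plugin.struct.saveData is assumed to expect (verify locally)."""
    return {schema: dict(fields)}

payload = struct_payload("server", {"location": "Oslo DC", "rack": "R12"})

# Hypothetical endpoint and credentials -- adjust to your instance.
# Note: a plain ServerProxy does not keep session cookies, so the
# login may need token auth or a cookie-aware transport.
# wiki = xmlrpc.client.ServerProxy("https://wiki.example.org/lib/exe/xmlrpc.php")
# wiki.dokuwiki.login("apiuser", "secret")
# wiki.plugin.struct.saveData("servers:web01", payload, "scripted struct update")
```

Driving something like this from cron would give the scheduled updates that the manual CSV importer can't.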
rkaa #4
Member since Jun 2019 · 6 posts · Location: Norway
Group memberships: Members
Thank you pop and andi, for the swift replies and good pointers. Those do sound like time-savers.
I had missed the saveData() method earlier; that would be it.

Haven't quite dropped the (admittedly heretical) idea of manipulating the struct.sqlite3 file more directly, though.
I remember some Tcl from way back, and it seems to have a decent SQLite interface.

What still swirls around in my mind is something like..:

- Convert the MySQL db to SQLite format. Tools for this are plentiful, with good HOWTOs and known pitfalls and fixes listed.
- Script some Tcl to have sqlite3 read from the converted db, then write the desired values to struct.sqlite3.
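For what it's worth, a rough sketch of that second step, in Python instead of Tcl (same idea; sqlite3 is in the standard library). The `data_server` table name and the pid/rev/latest/colN layout are only assumptions about struct's internals; inspect the real struct.sqlite3 with `.schema` first, and be aware that the plugin's caches won't know about rows written behind its back:

```python
import sqlite3

# In-memory stand-ins: 'source' plays the converted MySQL db,
# 'target' plays struct.sqlite3.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

source.execute("CREATE TABLE servers (name TEXT, location TEXT, rack TEXT)")
source.execute("INSERT INTO servers VALUES ('web01', 'Oslo DC', 'R12')")

# Assumed struct layout: one table per schema, pid = wiki page ID,
# latest = 1 marks the current revision (verify against your own file).
target.execute("""CREATE TABLE data_server
                  (pid TEXT, rev INTEGER, latest INTEGER,
                   col1 TEXT, col2 TEXT)""")

for name, location, rack in source.execute(
        "SELECT name, location, rack FROM servers"):
    target.execute(
        "INSERT INTO data_server (pid, rev, latest, col1, col2) "
        "VALUES (?, 0, 1, ?, ?)",
        (f"servers:{name}", location, rack))
target.commit()
```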

The basic schema would already be in place and assigned, but populating field values for the various pages needs boosting.

Then again.. maybe "the wheel" I need is already invented. Reading up on XML-RPC now (and considering learning curves..)


Thanks again.

Kind regards;
Kristin
This post was edited on 2019-06-26, 02:19 by rkaa.
rkaa #5
Member since Jun 2019 · 6 posts · Location: Norway
Group memberships: Members
Built a CSV for a lookup schema with ~800 servers, locations and racks: the import took 2-3 seconds.

Replicating what I used the data plugin's "datarelated" for also worked like a charm.
No caching issues. Nice.

Still pondering maintenance.


Related enhancement issue #42: support import from data
https://github.com/cosmocode/dokuwiki-plugin-struct/issues…

A user-contributed approach (zip, PHP) is linked in one of the comments there.
Not tested.


Kind regards;
Kristin
This board is powered by the Unclassified NewsBoard software, 20150713-dev, © 2003-2015 by Yves Goergen