Maximo Open Forum

  • 1.  Coerce json message creation to retain Spanish language chars stored in db, not convert to ascii tags?

    Posted 24 days ago
    Edited by Christopher Winston 24 days ago

    Hi all, I'm moving a slightly different version of a recent "Question to be Answered" into this Discussion type entry. 

    I've noticed while working on JSON-formatted integration messages that Spanish-language characters stored as such in the database (Oracle) are escaped into their ASCII \uXXXX form during JSON message creation (when doing a GET on an Object Structure or OSLC endpoint, when using a Publish Channel with Publish JSON, and when using a Publish Channel with a JSON Map). For example, coordinará, stored in an ALN attribute in the db, becomes coordinar\u00e1 in the JSON message / response.

    The receiving system of the data is going to save it, and requires saving the original chars. 

    Is there any way to change that behavior such that the text values provided by Maximo in the JSON message remain as stored in Oracle?

    Is there some other method to post-process all message content on an endpoint and replace back the original characters?

    Anyone else working with UTF-8 or ISO-8859-1 support challenges in their integrations?

    One edit: I can POST the characters into Maximo successfully, but when I GET the affected record back, the character substitution occurs. Also, this is Maximo 7613.
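
    Another edit, for context: \u00e1 is the standard JSON string escape for á, so a compliant JSON parser on the receiving side should restore the original character at parse time. A quick sketch of the round trip in Python (illustrative only, not Maximo code):

    ```python
    import json

    # Simulate what the serializer produces: ASCII-safe \uXXXX escapes
    # (ensure_ascii=True is Python's default, matching the behavior seen here).
    escaped = json.dumps({"description": "coordinará"})
    print(escaped)                 # {"description": "coordinar\u00e1"}

    # A compliant JSON parser restores the original character on parse.
    parsed = json.loads(escaped)
    print(parsed["description"])   # coordinará

    # Serializing with ensure_ascii=False keeps the raw UTF-8 characters instead.
    print(json.dumps(parsed, ensure_ascii=False))   # {"description": "coordinará"}
    ```

    So if the receiving system parses the JSON rather than storing the raw payload text, the original characters should already come through; the question stands for receivers that can't do that.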


    #Customizations
    #Integrations
    #Utilities

    ------------------------------
    Robert Matthews
    ------------------------------



  • 2.  RE: Coerce json message creation to retain Spanish language chars stored in db, not convert to ascii tags?

    Posted 17 days ago
    Edited by Erin Pierce 17 days ago

    No answers for this question so far, but adding here that I observe forward slashes (/) are also escaped in the outbound JSON, as \/.

    Perhaps we will explore a user exit script to undo these serialization conditions. TBD.

    Edit: I have just found this: https://www.ibm.com/support/pages/apar/IT16439. Not sure yet whether it helps.
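
    If we do go the user exit route, a parse-and-reserialize of the whole payload may be less fragile than string surgery. A rough Python sketch of the idea (the actual Maximo script hook and how the payload string would be obtained are still TBD):

    ```python
    import json

    def unescape_payload(payload: str) -> str:
        """Parse the serialized JSON and re-emit it with raw UTF-8 characters.

        One pass undoes both the \\uXXXX escapes and the \\/ escaping, since
        Python's serializer emits neither when ensure_ascii=False.
        """
        return json.dumps(json.loads(payload), ensure_ascii=False)

    raw = '{"owner": "coordinar\\u00e1", "path": "\\/maximo\\/oslc"}'
    print(unescape_payload(raw))   # {"owner": "coordinará", "path": "/maximo/oslc"}
    ```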

    ------------------------------
    Robert Matthews
    ------------------------------



  • 3.  RE: Coerce json message creation to retain Spanish language chars stored in db, not convert to ascii tags?

    Posted 14 days ago

    This sounds like an encoding problem, where the database or some other part of the chain is saving the JSON in something other than UTF-8. You mention ASCII, which only has 128 characters and is likely not compatible with the characters you are trying to encode. You mention Oracle as your database, so here is an Oracle article explaining why character sets matter and how they impact things: https://blogs.oracle.com/timesten/post/why-databasecharacterset-matters

    This is likely not a JSON issue but a character-encoding issue, and tracing which character sets are in use may help you track down where the problem is.

    Jason



    ------------------------------
    Jason VenHuizen
    Sharptree
    ------------------------------



  • 4.  RE: Coerce json message creation to retain Spanish language chars stored in db, not convert to ascii tags?

    Posted 12 days ago

    Thanks for the link, I will see if it helps identify the offending bit. :-)
    As related in the initial post, characters sent into Maximo, whether directly from the UI or via an interface, are stored and are queryable from the UI and from a SQL client exactly as they were entered; the full set is in the table below. So coordinará, entered directly in the Maximo UI or from an interface, is stored in the db (Oracle) and, when queried in the Maximo UI or via SQL, is returned as coordinará.
    When it is processed outbound via a Publish Channel or provided as a response to an OSLC or Object Structure endpoint GET call, it is returned as coordinar\u00e1.
    Perhaps there is somewhere in the MIF configs where character encoding can be set. This doesn't seem to be an Oracle thing, but it could be.

    In the meantime:

    • I understand there is a straightforward way, using a user exit script (PUBLISH.<PublishChannel>.USEREXIT.OUT.BEFORE), to do a payload-level find-and-replace on outbound Publish Channel generated messages, effectively modifying the irData.
    • For the OSLC response, it seems like replacing values field by field is the probable approach (setting values in fields by using the overrideValues(ctx) function)? Is there a payload-level scripting approach for find-and-replace on a GET response, versus field-by-field syntax?

    Again thanks for the encoding tip.

    Table of characters: outbound messaging processes produce the left-hand column, while the right-hand column is what exists and works inbound and internally at the UI and db levels:

    \u00e1	á
    \u00e9	é
    \u00ed	í 
    \u00f3	ó
    \u00fa	ú
    \u00f1	ñ
    \u00fc	ü 
    \u00a1	¡
    \u00c1	Á
    \u00c9	É
    \u00cd	Í
    \u00d3	Ó
    \u00da	Ú
    \u00d1	Ñ
    \u00dc	Ü
    \u00bf	¿
    \/	/
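
    And a sketch of the payload-level find-and-replace idea using the mapping above (Python, illustrative only, not actual user exit code); a generic regex over any \uXXXX escape avoids enumerating each pair:

    ```python
    import re

    # Generic replacement for any \uXXXX escape, covering the whole table above
    # without enumerating each pair; \/ is handled as a separate literal.
    # Note: a full JSON parse/re-serialize is safer, since this literal replace
    # would mangle a genuinely escaped backslash such as \\u00e1 in the payload.
    _ESCAPE = re.compile(r'\\u([0-9a-fA-F]{4})')

    def replace_escapes(payload: str) -> str:
        payload = _ESCAPE.sub(lambda m: chr(int(m.group(1), 16)), payload)
        return payload.replace('\\/', '/')

    print(replace_escapes('coordinar\\u00e1 \\u00bfqu\\u00e9? a\\/b'))
    # coordinará ¿qué? a/b
    ```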


    ------------------------------
    Robert Matthews
    ------------------------------