Square brackets stripped from utm_term field when sent to Redshift

Has anyone else encountered this situation? Given a URL like this:


the UTM parameters are interpreted correctly by Segment analytics.js and parsed out to the following (nested within `context`):

"campaign": {
  "source": "google",
  "medium": "paid_search",
  "name": "Some Campaign",
  "term": "[exact match keyword]"
}
But when it gets to Redshift, the square brackets around the keyword in `term` are stripped out:

term: exact match keyword

Any way to prevent this?  

4 replies
  • I looked into this, and it's happening because our ETL process sees the bracketed data as an array, and the process is set up to always stringify arrays. Right now there's not a great workaround for this, but I'd love to hear more about your use case so I can share it with our warehouses team. How is the data treated differently in your analysis when there are brackets vs. no brackets?

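A minimal sketch of how that could produce the behavior described, assuming the ETL step simply pattern-matches values that start with `[` and end with `]` (the function name and matching rule here are assumptions for illustration, not Segment's actual code):

```python
# Hypothetical sketch of an array-stringifying ETL step.
# Not Segment's real pipeline; it assumes bracketed strings are
# detected with a simple startswith/endswith check.

def stringify_if_array(value: str) -> str:
    """If a value looks like an array ("[a, b]"), flatten it to a
    comma-separated string, dropping the brackets; otherwise pass
    the value through unchanged."""
    if value.startswith("[") and value.endswith("]"):
        items = [item.strip() for item in value[1:-1].split(",")]
        return ", ".join(items)
    return value

print(stringify_if_array("[exact match keyword]"))        # -> exact match keyword
print(stringify_if_array("[some keyword, with a comma]")) # -> some keyword, with a comma
print(stringify_if_array("plain keyword"))                # -> plain keyword
```

Under this assumption, both of the cases asked about below lose their brackets, and the comma case would survive only as a rejoined comma-separated string.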
  • Thanks Brantley,

    Using square brackets wasn't my choice; they actually come automatically from Google AdWords. In AdWords, square brackets around a search term indicate [exact match] searches: https://support.google.com/adwords/answer/2497836?hl=en

    Can you tell me: knowing what happens when the ETL process sees those square brackets, what type of content would prompt it to do something other than stripping the brackets off? For example:

    1. [some exact keyword] -> ETL -> some exact keyword  

    2. [some keyword, with a comma] -> ETL -> ??  

    Does that make sense? Is there anything else I'm missing?

  • Hey Brantley -  

    Either you updated your post or I completely misread it. To answer your last question: when coming from paid search, these three utm_term values:

    • some search term  
    • "some search term"  
    • [some search term]  

    mean different things, which is why we were hoping to preserve the square brackets. It's not a big deal, though; now that we know what's happening, we can keep it in mind on the data side.

  • Hey Andy,  

    I did update my answer to make it applicable to other folks. I get what you're saying and your use case. We need to do a bit more digging on our end to see if there's a good workaround. Would you mind submitting a ticket here - https://segment.com/contact/support/integrations - with the query that you're running, so I can give our database engineering team an example to dig into?
