<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
     xmlns:content="http://purl.org/rss/1.0/modules/content/"
     xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
     xmlns:atom="http://www.w3.org/2005/Atom"
     xmlns:dc="http://purl.org/dc/elements/1.1/"
     xmlns:wfw="http://wellformedweb.org/CommentAPI/"
     >
  <channel>
    <title>Where am I?</title>
    <link>http://blakeley.com/blogofile</link>
    <description>Performance, scalability, databases, and whatever comes up.</description>
    <pubDate>Fri, 07 Jul 2023 19:44:55 GMT</pubDate>
    <generator>Blogofile</generator>
    <sy:updatePeriod>hourly</sy:updatePeriod>
    <sy:updateFrequency>1</sy:updateFrequency>
    <item>
      <title>Deduplicating Search Results</title>
      <link>http://blakeley.com/blogofile/2014/06/08/deduplicating-search-results</link>
      <pubDate>Sun, 08 Jun 2014 12:34:56 UTC</pubDate>
      <category><![CDATA[XQuery]]></category>
      <category><![CDATA[MarkLogic]]></category>
      <guid isPermaLink="true">http://blakeley.com/blogofile/2014/06/08/deduplicating-search-results</guid>
      <description>Deduplicating Search Results</description>
      <content:encoded><![CDATA[<p>So you're writing a MarkLogic application, and this question comes up:
How do we deduplicate search results?</p>
<p>In one sense MarkLogic will never return duplicates from a database lookup.
A single XPath, <code>cts:search</code>, or <code>search:search</code> expression will always
return unique nodes, as defined by the
<a href="https://www.w3.org/TR/xpath-functions/#func-is-same-node">XPath <code>is</code> operator</a>.</p>
<p>But your application might have its own, content-dependent definition
of a duplicate. This might depend on just a subset of the XML content.
For example you might be storing news articles pulled from different
media sources: newspapers, magazines, blogs, etc.
Often the same news story will appear in different sources,
and sometimes the text will be identical or extremely close.
When a user searches for the hot story of the day
you want to have all the variations available,
but the search results should roll them up together on the page.
You can see something like this if you search
<a href="https://news.google.com">Google News</a>.</p>
<p>One good strategy is to avoid duplicates entirely,
by ensuring that your documents have meaningful URIs.
Construct the URI using the same information
that determines whether or not a document is a duplicate.
This way if content arrives that duplicates existing content,
it turns out to have the same URI.
Then you are free to update the database with the latest copy,
ignore it, throw an error, or call for help.
If every news story has a dateline
and an author byline, we could construct document URIs
based on the date, location, and byline:
something like <code>/news/2014/05/30/washington/jones</code>.</p>
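<p>As a sketch, that URI construction might look something like the function below.
The <code>article</code>, <code>dateline</code>, and <code>byline</code> names are
hypothetical, chosen only for illustration:</p>
<pre><code>declare function local:article-uri($article as element(article))
as xs:string
{
  (: join the deduplication facts into a path-style URI,
   : turning a 2014-05-30 date into 2014/05/30 :)
  string-join(
    ('/news',
     translate(string($article/dateline/@date), '-', '/'),
     lower-case($article/dateline/@location),
     lower-case($article/byline)),
    '/')
};
</code></pre>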
<p>But maybe that isn't a very good solution for our news application.
Remember that we want to search for articles,
but we only want one article per story.
So we have to store all the duplicate articles,
and we need query-time flexibility to display just one article per story.</p>
<p>Clearly we will need to generate a story-id for each story,
one that stays the same across all the different articles
that cover that story. That might use a mechanism similar
to the URI computation above, except that we would put the result
in an element, and the value would not need to be unique.
We could use the same facts we were going to use in the document URI:</p>
<pre><code>&lt;story-id&gt;2014-05-30|washington|jones&lt;/story-id&gt;
</code></pre>
<p>Once we have our application set up to generate <code>story-id</code> elements,
we could try a brute-force approach.
Search the database, then walk through the search results.
Extract each <code>story-id</code> value and check it
against a list of previously-seen story-id values.
We could use a map for that.
If the <code>story-id</code> has already been seen, ignore it.
Otherwise put the <code>story-id</code> in the map and return the article.</p>
<pre><code>(
  let $search-results := search:search(...)
  let $seen := map:map()
  for $article in $search-results
  (: atomize the element so the xs:string type annotation holds :)
  let $story-id as xs:string := string($article/story-id)
  where not(map:contains($seen, $story-id))
  return (
    map:put($seen, $story-id, $story-id),
    $article))[$start to $stop]
</code></pre>
<p>But there are problems with this approach. Pagination is tricky
because we don't know how many duplicates there will be.
So we have to ask the database for a lot of results,
maybe all of them at once, and then filter and paginate in user code.
This gets more and more expensive as the result size increases,
and trickier to manage as the user paginates through the results.
If a search matches a million articles, we might have to retrieve and check
all the matches before we can display any results.
That's going to be slow, and probably limited by I/O speeds.
Nowadays we could throw SSD at it, but even SSD has limits.</p>
<p>Another problem with the brute-force approach is that
facets generated by the database will not match the deduplicated results.
You might have a facet on author that shows a count of 1000,
but deduplication filters out all but 100 of those articles.</p>
<p>So let's look at another approach. Instead of deduplicating after we search,
let's deduplicate before we search. That might sound crazy,
but we have a couple of powerful tools that make it possible:
<a href="https://docs.marklogic.com/cts:value-co-occurrences"><code>cts:value-co-occurrences</code></a>
and <a href="https://docs.marklogic.com/cts:document-query"><code>cts:document-query</code></a>.
The idea is to deduplicate
based on the co-occurrence of <code>story-id</code> and document URI,
without retrieving any documents.
Then we query the database again,
this time fetching only the non-duplicate documents
that we want to return.</p>
<p>Each article is stored as a document with a unique document URI.
We enable the
<a href="https://docs.marklogic.com/guide/search-dev/lexicon#id_50782">document URI lexicon</a>
and we also create an element-range index on the element named <code>story-id</code>.
As described above, we construct a <code>story-id</code>
for every article as it arrives and add it to the XML.
This <code>story-id</code> is our deduplication key: it uniquely identifies a story,
and if multiple articles might have the same <code>story-id</code> value
then they are treated as duplicates.</p>
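<p>As a rough sketch, the two index settings could be scripted
with the Admin API. Treat this as an outline rather than a recipe;
the database name <code>news</code> is an assumption:</p>
<pre><code>import module namespace admin="http://marklogic.com/xdmp/admin"
  at "/MarkLogic/admin.xqy";

let $config := admin:get-configuration()
let $db := xdmp:database('news')
(: enable the document URI lexicon :)
let $config := admin:database-set-uri-lexicon($config, $db, true())
(: add a string range index on story-id :)
let $config := admin:database-add-range-element-index(
  $config, $db,
  admin:database-range-element-index(
    'string', '', 'story-id',
    'http://marklogic.com/collation/codepoint', false()))
return admin:save-configuration($config)
</code></pre>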
<p>A deduplication key is application-specific, and might be anything.
An application might even have multiple deduplication keys
for different query types.
However it's essential to have a deduplication key for every document
that you want to query, even if only some documents will have duplicates.
The technique we're going to use will only return documents
that have a deduplication key.
An article with no <code>story-id</code> simply won't show up in the co-occurrence
results, so it won't show up in search results either.</p>
<p>Here's some code to illustrate the idea. Start with <code>$query-original</code>,
which is the original user query as a cts:query item.
We might generate that using 
<a href="https://docs.marklogic.com/search:parse"><code>search:parse</code></a>
or perhaps the <a href="https://github.com/mblakele/xqysp">xqysp</a> library.</p>
<pre><code>(: For each unique story-id there may be multiple article URIs.
 : This implementation always uses the first one.
 :)
let $query-dedup := cts:document-query(
  let $m := cts:value-co-occurrences(
    cts:element-reference(
      xs:QName('story-id'),
      'collation=http://marklogic.com/collation/codepoint'),
    cts:uri-reference(),
    'map')
  for $key in map:keys($m)
  return map:get($m, $key)[1])
(: The document-query alone would match the right articles,
 : but there would be no relevance ranking.
 : Using both queries eliminates duplicates and preserves ranking.
 :)
let $query-full := cts:and-query(($query-original, $query-dedup))
...
</code></pre>
<p>Now we can use <code>$query-full</code> with any API that uses cts:query items,
such as <code>cts:search</code>. In order to match, an article will have to match
<code>$query-original</code> and it will have to have one of the URIs
that we selected from the co-occurrence map.</p>
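<p>For example, a paginated <code>cts:search</code> call with the combined
query might look like this; the page variables are placeholders:</p>
<pre><code>let $start := 1
let $page-size := 10
return cts:search(collection(), $query-full)
  [$start to $start + $page-size - 1]
</code></pre>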
<p>Instead of calling <code>cts:search</code> directly, we might want to use 
<a href="https://docs.marklogic.com/search:resolve"><code>search:resolve</code></a>.
That function expects a cts:query XML element, not a cts:query item.
So we need a little extra code to turn the cts:query item
into an XML document and then extract its root element:</p>
<pre><code>...
return search:resolve(
  document { $query-full }/*,
  $search-options,
  $pagination-start,
  $pagination-size)
</code></pre>
<p>Many search applications also provide facets. You can ask <code>search:resolve</code>
for facets by providing the right search options,
or you can call <a href="https://docs.marklogic.com/cts:values"><code>cts:values</code></a> yourself.
Note that since facets are not relevance-ranked,
it might be a little faster to use <code>$query-dedup</code> instead of <code>$query-full</code>.</p>
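<p>For instance, assuming an <code>author</code> range index exists,
an author facet that matches the deduplicated results could be
computed directly:</p>
<pre><code>cts:values(
  cts:element-reference(xs:QName('author')),
  (), 'frequency-order',
  cts:and-query(($query-original, $query-dedup)))
</code></pre>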
<p>Speaking of performance, how fast is this? In my testing it added
an O(n) component, linear with the number of keys in the
<code>cts:value-co-occurrences</code> map. With a small map the overhead is low,
and deduplicating 10,000 items only adds a few tens of milliseconds.
But with hundreds of thousands of map items the profiler
shows more and more time spent in the XQuery FLWOR expression
that extracts the first document URI from each map item.</p>
<pre><code>  let $m := cts:values-co-occurrences(
    cts:element-reference(
      xs:QName('story-id'),
      'collation=http://marklogic.com/collation/codepoint'),
    cts:uri-reference(),
    'map')
  for $key in map:keys($m)
  return map:get($m, $key)[1])
</code></pre>
<p>We can speed that up a little bit by trading the FLWOR
for function mapping.</p>
<pre><code>declare function local:get-first(
  $m as map:map,
  $key as xs:string)
as xs:string
{
  map:get($m, $key)[1]
};

let $m := cts:value-co-occurrences(
  cts:element-reference(
    xs:QName('story-id'),
    'collation=http://marklogic.com/collation/codepoint'),
  cts:uri-reference(),
  'map')
return local:get-first($m, map:keys($m))
</code></pre>
<p>However this is a minor optimization, and with large maps
it will still be expensive to extract the non-duplicate URIs.
It's both faster and more robust than the brute-force approach,
but not as fast as native search.</p>
<p>Pragmatically, I would try to handle these performance characteristics
in the application. Turn deduplication off by default,
and only enable it as an option
when a search returns fewer than 100,000 results.
This would control the performance impact of the feature,
providing its benefits without compromising overall performance.</p>
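<p>That policy might be implemented with <code>xdmp:estimate</code>,
which counts matches from the indexes without retrieving documents.
The 100,000 threshold here is arbitrary:</p>
<pre><code>let $estimate := xdmp:estimate(
  cts:search(collection(), $query-original))
let $query :=
  if ($estimate ge 100000) then $query-original
  else cts:and-query(($query-original, $query-dedup))
return cts:search(collection(), $query)[1 to 10]
</code></pre>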
<p>It's also tempting to think about product enhancements.
We could avoid some of this work if we could find a way
to retrieve only the part of the map needed for the current search page,
but this is not feasible with the current implementation
of <code>cts:values-co-occurrences</code>. That function would have to return
the co-occurrence map sorted by the score of each story-id.
That's tricky because normally scores are calculated for documents,
in this case articles.</p>
<p>One way to speed this up without changing MarkLogic Server
could be to move some of the work into the forests.
MarkLogic Server supports
<a href="https://docs.marklogic.com/guide/app-dev/aggregateUDFs">User-Defined Functions</a>,
which are C++ functions that run directly on range indexes.
I haven't tried this approach myself, but in theory you could write a UDF
that would deduplicate based on the <code>story-id</code> and URI co-occurrence.
Then you could call this function with
<a href="https://docs.marklogic.com/cts:aggregate"><code>cts:aggregate</code></a>.
This would work best if you could partition your forests
using the <code>story-id</code> value, so that duplicate articles
are guaranteed to be in the same forest.
Used carefully this approach could be much faster,
possibly allowing fast deduplication with millions of URIs.</p>
<p>For more on that idea, see the documentation for
<a href="https://docs.marklogic.com/guide/admin/tiered-storage">Tiered Storage</a>
and the
<a href="https://developer.marklogic.com/learn/a-mapreduce-aggregation-function">UDF plugin tutorial</a>.
If you try it, please let me know how it works out.</p>]]></content:encoded>
    </item>
    <item>
      <title>Introduction to Multi-Statement Transactions</title>
      <link>http://blakeley.com/blogofile/2013/06/21/introduction-to-multi-statement-transactions</link>
      <pubDate>Fri, 21 Jun 2013 12:34:56 UTC</pubDate>
      <category><![CDATA[XQuery]]></category>
      <category><![CDATA[MarkLogic]]></category>
      <guid isPermaLink="true">http://blakeley.com/blogofile/2013/06/21/introduction-to-multi-statement-transactions</guid>
      <description>Introduction to Multi-Statement Transactions</description>
      <content:encoded><![CDATA[<p>If you are an old hand with MarkLogic, you are used to writing
update queries with implicit commits. Sometimes this
means restructuring your code so that everything can happen in one commit,
with no conflicting updates. In extreme cases you might
even decide to run multiple transactions from one query,
using <code>xdmp:invoke</code> or semicolons.
Historically this meant giving up atomicity.</p>
<p>Multi-statement transactions, introduced in MarkLogic 6,
promise a third way. We can write a transaction that spans
multiple statements, with an explicit commit or rollback.</p>
<p>For most updates it's probably best to stick with the old ways
and use implicit commits. But let's look at a concrete example
of a time when multi-statement transactions are the right tool
for the job.</p>
<p>Suppose you are using DLS (Document Library Services)
to manage your document versioning. But you have a special case
where you want to insert two discrete versions of the same document
atomically. That may sound odd, but I ran into that exact problem recently.</p>
<p>First we need to discover that there is a problem.
Let's bootstrap a test document with DLS.</p>
<pre><code>import module namespace dls="http://marklogic.com/xdmp/dls"
  at "/MarkLogic/dls.xqy";
try {
  dls:document-delete('test', false(), false()) }
catch ($ex) {
  if ($ex/error:code ne 'DLS-UNMANAGED') then xdmp:rethrow()
  else if (empty(doc('test'))) then ()
  else xdmp:document-delete('test') }
;
import module namespace dls="http://marklogic.com/xdmp/dls"
  at "/MarkLogic/dls.xqy";
dls:document-insert-and-manage('test', false(), &lt;x id="x1"/&gt;)
</code></pre>
<p>Now let's write some XQuery to insert two versions in one update,
and see what happens.</p>
<pre><code>import module namespace dls="http://marklogic.com/xdmp/dls"
  at "/MarkLogic/dls.xqy";
dls:document-checkout-update-checkin(
  'test', &lt;x id="x2"/&gt;, "version two", true()),
dls:document-checkout-update-checkin(
  'test', &lt;x id="x3"/&gt;, "version three", true())
</code></pre>
<p>This throws an <code>XDMP-CONFLICTINGUPDATES</code> error, because these calls to DLS
end up trying to update the same nodes twice in the same transaction.
In implicit commit mode, aka "auto" mode, this is difficult to avoid.
We could ask MarkLogic to extend DLS with a new function
designed for this situation. But that is a long-term solution,
and we need to move on with this implementation.</p>
<p>So what can we do? We might read up on <code>xdmp:invoke</code>, <code>xdmp:eval</code>, etc.
If we are careful, we can write a top-level read-only query
that invokes one or more update transactions.</p>
<pre><code>(: Entry point - must be a read-only query. :)
xdmp:invoke(
  'update.xqy',
  (xs:QName('URI'), 'test',
   xs:QName('NEW'), &lt;x id="x2"/&gt;,
   xs:QName('NOTE'), "version two")),
xdmp:invoke(
  'update.xqy',
  (xs:QName('URI'), 'test',
   xs:QName('NEW'), &lt;x id="x3"/&gt;,
   xs:QName('NOTE'), "version three"))
</code></pre>
<p>This invokes a module called <code>update.xqy</code>, which would look like this:</p>
<pre><code>(: update.xqy :)
import module namespace dls="http://marklogic.com/xdmp/dls"
  at "/MarkLogic/dls.xqy";

declare variable $NEW as node() external ;
declare variable $NOTE as xs:string external ;
declare variable $URI as xs:string external ;

dls:document-checkout-update-checkin(
  $URI, $NEW, $NOTE, true())
</code></pre>
<p>This works - at least, it doesn't throw <code>XDMP-CONFLICTINGUPDATES</code>.
But we have lost atomicity. Each of the two updates runs
as a different transaction. This opens up a potential race
condition, where a second query updates the document
in between our two transactions. That could break our application.</p>
<p>There are ways around this, but they get complicated quickly.
They are also difficult to test, so we can never be confident
that we have plugged all the potential holes in our process.
It would be much more convenient if we could run multiple
statements inside one transaction, with each statement able
to see the database state of the previous statements.</p>
<p>We can do exactly that using a multi-statement transaction.
Let's get our feet wet by looking at a very simple MST.</p>
<pre><code>declare option xdmp:transaction-mode "update";

xdmp:document-insert('temp', &lt;one/&gt;)
;

xdmp:document-insert('temp', &lt;two/&gt;),
xdmp:commit()
</code></pre>
<p>There are three important points to this query.</p>
<ol>
<li>The option <code>xdmp:transaction-mode="update"</code>
  begins a multi-statement transaction.</li>
<li>The semicolon after the first <code>xdmp:document-insert</code>
  ends that statement and begins another.</li>
<li>The <code>xdmp:commit</code> ends the multi-statement transaction
  by committing all updates to the database.</li>
</ol>
<p>This runs without error, and we can verify that <code>doc('temp')</code>
contains <code>&lt;two/&gt;</code> after it runs.
But how can we prove that all this takes place in a single transaction?
Let's decorate the query with a few more function calls.</p>
<pre><code>declare option xdmp:transaction-mode "update";

xdmp:get-transaction-mode(),
xdmp:transaction(),
doc('temp')/*,
xdmp:document-insert('temp', &lt;one/&gt;)
;

xdmp:get-transaction-mode(),
xdmp:transaction(),
doc('temp')/*,
xdmp:document-insert('temp', &lt;two/&gt;),
xdmp:commit()
</code></pre>
<p>This time we return some extra information within each statement:
the transaction mode, the transaction id, and the contents of the test doc.
The transaction ids will be different every time, but here is one example.</p>
<pre><code>update
17378667561611037626
&lt;two/&gt;
update
17378667561611037626
&lt;one/&gt;
</code></pre>
<p>So the document <code>temp</code> started out with the old node <code>&lt;two/&gt;</code>,
but after the first statement it changed to <code>&lt;one/&gt;</code>.
Both statements see the same transaction mode and id.</p>
<p>Try changing the <code>xdmp:transaction-mode</code> declaration to <code>auto</code>, the default.
You should see the mode change to <code>auto</code>, and two different transaction-ids.
This tells us that in <code>update</code> mode we have a multi-statement transaction,
and in <code>auto</code> mode we have a non-atomic sequence of two different transactions.
Before MarkLogic 6, all update statements ran in <code>auto</code> mode.</p>
<p>Now let's apply what we've learned about MST to the original problem:
inserting two different versions of a managed document in a single transaction.</p>
<pre><code>import module namespace dls="http://marklogic.com/xdmp/dls"
  at "/MarkLogic/dls.xqy";

declare option xdmp:transaction-mode "update";

dls:document-checkout-update-checkin(
  'test', &lt;x id="x2"/&gt;, "version two", true())
;

import module namespace dls="http://marklogic.com/xdmp/dls"
  at "/MarkLogic/dls.xqy";
dls:document-checkout-update-checkin(
  'test', &lt;x id="x3"/&gt;, "version three", true()),
xdmp:commit()
</code></pre>
<p>As above, this code uses three important features:</p>
<ol>
<li>Set <code>xdmp:transaction-mode="update"</code> to begin the MST.</li>
<li>Use semicolons to end one statement and begin another.</li>
<li>Use <code>xdmp:commit</code> to end the MST and commit all updates.</li>
</ol>
<p>To abort a multi-statement transaction, use <code>xdmp:rollback</code>.</p>
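<p>A rollback looks just like the commit examples above,
with <code>xdmp:rollback</code> as the final call:</p>
<pre><code>declare option xdmp:transaction-mode "update";

xdmp:document-insert('temp', &lt;one/&gt;)
;

(: discard the insert above instead of committing it :)
xdmp:rollback()
</code></pre>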
<p>So now you have a new tool for situations where implicit commit
is a little too awkward. Try not to overdo it, though.
In most situations, the default <code>xdmp:transaction-mode="auto"</code>
is still the best path.</p>]]></content:encoded>
    </item>
    <item>
      <title>External Variables (Code Review, Part II)</title>
      <link>http://blakeley.com/blogofile/2012/09/28/external-variables-(code-review,-part-ii)</link>
      <pubDate>Fri, 28 Sep 2012 12:34:56 UTC</pubDate>
      <category><![CDATA[XQuery]]></category>
      <category><![CDATA[MarkLogic]]></category>
      <guid isPermaLink="true">http://blakeley.com/blogofile/2012/09/28/external-variables-(code-review,-part-ii)</guid>
      <description>External Variables (Code Review, Part II)</description>
      <content:encoded><![CDATA[<p>Remember when I talked about <a href="/blogofile/archives/518">XQuery Code Review</a>?
The other day I was forwarding that link to a client,
and noticed that I forgot to mention external variables.
I talked about <code>xdmp:eval</code> and <code>xdmp:value</code>
in the section titled <em>Look_for_injection_paths</em>,
and mentioned that it's usually better to use <code>xdmp:invoke</code> or <code>xdmp:unpath</code>,
which are less vulnerable to injection attacks.</p>
<p>But it can be convenient or even necessary to evaluate dynamic XQuery.
That's what <code>xdmp:eval</code> and <code>xdmp:value</code> are there for, after all.
I've even written tools like <a href="https://github.com/mblakele/presta">Presta</a>
to help you.</p>
<p>Used properly, dynamic queries can be made safe.
The trick is to <strong>never</strong> let user data directly into your dynamic queries.
Whenever you see <code>xdmp:eval</code> or <code>xdmp:value</code> in XQuery,
ask yourself "Where did this query come from?"
If any part of it came from user input, flag it for a rewrite.</p>
<pre><code>(: WRONG - This code is vulnerable to an injection attack! :)
xdmp:eval(
  concat('doc("', xdmp:get-request-field('uri'), '")'))
</code></pre>
<p>Actually there are at least two bugs in this code.
There is a functional problem: what happens if the <code>uri</code> request field
is <code>fubar-"baz"</code>? You might not expect a uri to include a quote,
and maybe that will never legitimately happen in your application.
But if that request-field does arrive, <code>xdmp:eval</code> will throw an error:</p>
<pre><code>XDMP-UNEXPECTED: (err:XPST0003) Unexpected token syntax error
</code></pre>
<p>That's because you haven't properly escaped the uri in the dynamic XQuery.
And you could escape it. You could even write a function to do that for you.
But if you miss any of the various characters that need escaping,
<code>XDMP-UNEXPECTED</code> will be there, waiting for you.</p>
<p>So far we've only talked about innocent mistakes. But what if someone out there
is actively hostile? Let's say it's me. If I know that your web service
expects a <code>uri</code> request-field, I might guess that your code looks something like
the code above, and try an injection attack.</p>
<p>After a little trial and error, I might find that sending
<code>uri=x"),cts:uris(),("</code> returns a list of all the documents in your database,
whether you want me to see them or not. Then I can send something like
<code>uri=x"),xdmp:document-delete("fubar</code>. If that document exists,
and security isn't tight... it's gone. Or maybe I will decide to try
<code>xdmp:forest-clear</code> instead.</p>
<p>In SQL we use bind variables to solve both of these problems.
Any user input binds to a variable inside the SQL,
and the database driver takes care of escaping for us.
We no longer have to worry about obscure syntax errors or injection attacks,
as long as we remember to use variables for all externally-supplied parameters.
In XQuery these are known as external variables.</p>
<pre><code>(: Always use external variables for user-supplied data. :)
xdmp:eval(
  'declare variable $URI as xs:string external ;
   doc($URI)',
  (xs:QName('URI'), xdmp:get-request-field('uri')))
</code></pre>
<p>The syntax is a little odd: that second parameter is a sequence of
alternating QName and value. Because XQuery doesn't support nested sequences,
this means you can't naively bind a sequence to a variable.
Instead you can pass in XML or a map,
or use a convention like comma-separated values (CSV).</p>
<pre><code>(: Using XML to bind a sequence to an external variable. :)
xdmp:eval(
  'declare variable $URI-LIST as element(uri-list) external ;
   doc($URI-LIST/uri)',
  (xs:QName('URI-LIST'),
   element uri-list {
     for $uri in xdmp:get-request-field('uri')
     return element uri { $uri } }))
</code></pre>
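<p>A map works too, since a map value can hold a sequence.
This sketch binds the whole map as one external variable:</p>
<pre><code>(: Using a map to bind a sequence to an external variable. :)
let $m := map:map()
let $_ := map:put($m, 'uris', xdmp:get-request-field('uri'))
return xdmp:eval(
  'declare variable $M as map:map external ;
   doc(map:get($M, "uris"))',
  (xs:QName('M'), $m))
</code></pre>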
<p>Even though these examples all use pure XQuery, this code review principle
also applies to XCC code. If you see a Java or .NET program using <code>AdHocQuery</code>,
check to make sure that all user input binds to variables.</p>
<p>Remember, the best time to fix a potential security problem
is <strong>before</strong> the code goes live.</p>]]></content:encoded>
    </item>
    <item>
      <title>rsyslog and MarkLogic</title>
      <link>http://blakeley.com/blogofile/2012/05/17/rsyslog-and-marklogic</link>
      <pubDate>Thu, 17 May 2012 18:00:01 UTC</pubDate>
      <category><![CDATA[MarkLogic]]></category>
      <category><![CDATA[Linux]]></category>
      <guid isPermaLink="true">http://blakeley.com/blogofile/2012/05/17/rsyslog-and-marklogic</guid>
      <description>rsyslog and MarkLogic</description>
      <content:encoded><![CDATA[<p>You probably know that MarkLogic Server logs important events
to the <code>ErrorLog.txt</code> file. By default it logs events at <code>INFO</code> or higher,
but many development and staging environments change the <code>file-log-level</code>
to <code>DEBUG</code>. These log levels are also available to the <code>xdmp:log</code> function,
and some of your XQuery code might use that for <code>printf</code>-style debugging.</p>
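<p>For example, <code>xdmp:log</code> accepts an optional level argument,
so debug chatter can be kept out of production logs. The messages here
are illustrative:</p>
<pre><code>xdmp:log('finished loading batch', 'debug'),
xdmp:log('batch contained unexpected input', 'warning')
</code></pre>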
<p>You might even know that MarkLogic also sends important events
to the operating system. On Linux this means <code>syslog</code>, and important events
are those at <code>NOTICE</code> and higher by default.</p>
<p>But are you monitoring these events?</p>
<p>How can you set up your MarkLogic deployment so that it will automatically
alert you to errors, warnings, or other important events?</p>
<p>Most Linux deployments now use <code>rsyslog</code> as their system logging facility.
The <a href="http://www.rsyslog.com/doc/manual.html">full documentation</a> is available,
but this brief tutorial will show you how to set up email alerts for MarkLogic
using <code>rsyslog</code> version 4.2.6.</p>
<p>All configuration happens in <code>/etc/rsyslog.conf</code>.
Here is a sample of what we need for email alerts.
First, at the top of the file you should see several <code>ModLoad</code> declarations.
Check for <code>ommail</code> and add it if needed.</p>
<pre><code>$ModLoad ommail.so  # email support
</code></pre>
<p>Next, add a stanza for MarkLogic somewhere after the <code>ModLoad</code> declaration.</p>
<pre><code># MarkLogic
$template MarkLogicSubject,"Problem with MarkLogic on %hostname%"
$template MarkLogicBody,"rsyslog message from MarkLogic:\r\n[%timestamp%] %app-name% %pri-text%:%msg%"
$ActionMailSMTPServer 127.0.0.1
$ActionMailFrom your-address@your-domain
$ActionMailTo your-address@your-domain
$ActionMailSubject MarkLogicSubject
#$ActionExecOnlyOnceEveryInterval 3600
daemon.notice   :ommail:;MarkLogicBody
</code></pre>
<p>Be sure to replace both instances of <code>your-address@your-domain</code>
with an appropriate value. The ActionMailSMTPServer must be smart enough
to deliver email to that address. I used a default <code>sendmail</code> configuration
on the local host, but you might choose to connect to a different host.</p>
<p>Note that I have commented out the <code>ActionExecOnlyOnceEveryInterval</code> option.
The author of <code>rsyslog</code>, <a href="http://www.gerhards.net/rainer">Rainer Gerhards</a>,
recommends setting this value to a reasonably high number of seconds
so that your email inbox is not flooded with messages.
However, the <code>rsyslog</code> documentation states that excess messages
are discarded, and I did not want to lose any important messages.
What I would really like to do is buffer messages for N seconds at a time,
and merge them together in one email.
But while <code>rsyslog</code> has many features, and does offer buffering,
it does not seem to know how to combine consecutive messages
into a single email.</p>
<p>Getting back to what <code>rsyslog</code> <em>can</em> do,
you can customize the subject and body of the mail message.
With the configuration above, a restart of the server
might send you an email like this one:</p>
<pre><code>Subject: Problem with MarkLogic on myhostname.mydomain

rsyslog message from MarkLogic:
[May 17 23:58:36] MarkLogic daemon.notice&lt;29&gt;: Starting MarkLogic Server 5.0-3 i686 in /opt/MarkLogic with data in /var/opt/MarkLogic
</code></pre>
<p>When making any <code>rsyslog</code> changes, be sure to restart the service:</p>
<pre><code>sudo service rsyslog restart
</code></pre>
<p>At the same time, check your system log for any errors or typos.
This is usually <code>/var/log/messages</code> or <code>/var/log/syslog</code>.
The full documentation for <a href="http://www.rsyslog.com/doc/property_replacer.html">template substitution properties
</a> is online.
You can also read about a wealth of other options available in <code>rsyslog</code>.</p>]]></content:encoded>
    </item>
    <item>
      <title>Directory Assistance</title>
      <link>http://blakeley.com/blogofile/2012/03/19/directory-assistance</link>
      <pubDate>Mon, 19 Mar 2012 12:34:56 UTC</pubDate>
      <category><![CDATA[MarkLogic]]></category>
      <guid isPermaLink="true">http://blakeley.com/blogofile/2012/03/19/directory-assistance</guid>
      <description>Directory Assistance</description>
      <content:encoded><![CDATA[<p>For a long time now, MarkLogic Server has implemented two distinct features
that are both called "directories". This causes confusion, especially since one
of these features scales well and the other often causes scalability problems.
Let's try to distinguish between these two features,
and talk about why they both exist.</p>
<p>Directories were first introduced to accommodate WebDAV.
Since WebDAV clients treat the database as if it were a filesystem,
they expect document URIs with the solidus, or <code>/</code>,
to imply directory structure. That's one feature called "directories":
if you insert a document with the URI <code>/a/b/c.xml</code>, you can call
<code>xdmp:directory('/a/b/', '1')</code> to select that document -
and any other document with the same URI prefix. These URI prefixes
are indexed in much the same way that document URIs and collection URIs
are indexed, so queries are "searchable" and scale well.</p>
<p>This "implied directory structure" works with any database configuration.
You do not need <code>directory-creation=automatic</code>
to use the <code>cts:directory-query</code> and <code>xdmp:directory</code> functions.</p>
<script src="https://gist.github.com/2127471.js?file=gistfile1.xq"></script>

<p>This returns a query plan in XML:</p>
<script src="https://gist.github.com/2127484.js?file=gistfile1.xml"></script>

<p>But WebDAV clients expect more than just directory listings.
They also want to lock documents and directories.
It is easy to understand document locking: the idea here is that
a WebDAV-aware editor might lock a document, copy it to the local filesystem
for editing, and copy it back to the server when the editing session ends.
It may be less clear that a WebDAV client sometimes needs to lock directories,
but it does.</p>
<p>Directory locking is implemented using special directory fragments.
These are properties fragments with no associated document,
so they are sometimes called "naked properties."
Here is an example.</p>
<script src="https://gist.github.com/2127498.js?file=gistfile1.xq"></script>

<p>Once this update has committed to the database,
we can query the directory fragment.</p>
<script src="https://gist.github.com/2127503.js?file=gistfile1.xq"></script>

<script src="https://gist.github.com/2127509.js?file=gistfile1.xml"></script>

<p>Once you have a directory fragment, you have something that the database
can lock for WebDAV clients. It's rare for anything else
to use this behavior, but <code>xdmp:lock-acquire</code> is available for custom
content management applications.</p>
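<p>As a sketch of that custom use - the directory URI and owner name here are placeholders,
and the two steps should run as separate requests so the fragment is committed before it is locked:</p>
<pre><code>xquery version "1.0-ml";
(: first request: create the directory fragment :)
xdmp:directory-create('/projects/')
</code></pre>
<pre><code>xquery version "1.0-ml";
(: second request: lock the directory fragment, WebDAV-style :)
xdmp:lock-acquire(
  '/projects/', 'exclusive', '0', 'example-owner', 3600)
</code></pre>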
<p>Earlier I mentioned that there are two kinds of "directories",
one that scales well and one that sometimes causes problems.
I wrote that queries based on directory URIs scale well,
so you might guess that directory fragments sometimes cause problems.
That's correct, and it results from a database feature called
"automatic directory creation".</p>
<p>When automatic directory creation is enabled - as it is by default -
the database will ensure that directory fragments exist for every
implied directory in the URI for every new or updated document.
The document URI <code>/a/b/c.xml</code> implies a directory fragment
for <code>/</code>, <code>/a/</code>, and <code>/a/b/</code>. So the database will ensure that these exist
whenever a request updates <code>/a/b/c.xml</code>.</p>
<p>So what happens when one request updates <code>/a/b/c.xml</code>
and another request updates <code>/a/b/d.xml</code>?</p>
<p>Both requests try to ensure that there are directory fragments
for <code>/</code>, <code>/a/</code>, and <code>/a/b/</code>. This causes lock contention.
The same problem shows up if another request is updating <code>/fubar.xml</code>,
because both queries look for the <code>/</code> directory fragment.
The situation gets worse as concurrency increases.
It gets even worse if "maintain directory last-modified" is enabled,
because the directory fragments have to be updated too.
But happily that feature is not enabled by default.</p>
<p>The solution to this problem is simple. In my experience
at least 80% of MarkLogic Server customers do not use WebDAV,
so they do not need automatic directory creation. Instead,
they can set directory creation to "manual".
Do this whenever you create a new database,
or script it using <code>admin:database-set-directory-creation</code>.</p>
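<p>Scripting that setting might look like this - a sketch, where the database name
<code>my-database</code> is a placeholder:</p>
<pre><code>xquery version "1.0-ml";
import module namespace admin = "http://marklogic.com/xdmp/admin"
  at "/MarkLogic/admin.xqy";

let $config := admin:get-configuration()
let $config := admin:database-set-directory-creation(
  $config, xdmp:database('my-database'), 'manual')
return admin:save-configuration($config)
</code></pre>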
<p><img alt="admin UI screen shot" src="/blogofile/images/0109.directory-assistance.admin-UI.png" title="setting directory creation in the admin UI" /></p>
<p>If you do use WebDAV, try to limit its scope. Perhaps you can get by
with a limited number of predefined WebDAV directories,
which you create manually using <code>xdmp:directory-create</code>
as part of your application deployment.
Or perhaps you only use WebDAV for your XQuery modules,
which only contain a few hundred or at most a few thousand documents.
In that case you can use automatic directory creation without a problem.</p>
<p>Generally speaking, really large databases don't use WebDAV anyway.
"Big content" databases, with hundreds of millions or billions of documents,
tend to be much too large for WebDAV to be useful.
For smaller databases where WebDAV is useful,
automatic directory creation is fine.</p>
<p>Sometimes it is useful to set "directory-creation" to "manual-enforced".
With this configuration you will see an <code>XDMP-PARENTDIR</code> error
whenever your code tries to insert a document
with an implied directory structure
that does not have corresponding directory fragments.
But this feature is rarely used.</p>
<p>To sum up, directory URIs are highly scalable and very useful,
and are always indexed. Your code can call <code>xdmp:directory</code>
with any database settings.
The default "automatic directory creation" feature creates directory fragments,
which can be a bottleneck for large databases.
Most applications are better off with "directory-creation" set to "manual".</p>]]></content:encoded>
    </item>
    <item>
      <title>Let-free Style and Streaming</title>
      <link>http://blakeley.com/blogofile/2012/03/19/let-free-style-and-streaming</link>
      <pubDate>Mon, 19 Mar 2012 12:34:56 UTC</pubDate>
      <category><![CDATA[XQuery]]></category>
      <category><![CDATA[MarkLogic]]></category>
      <guid isPermaLink="true">http://blakeley.com/blogofile/2012/03/19/let-free-style-and-streaming</guid>
      <description>Let-free Style and Streaming</description>
      <content:encoded><![CDATA[<p>If you are familiar with Lisp or Scheme, you know that a function call can
replace a variable binding, and function calls can also replace most loops.
This is also true in XQuery.</p>
<script src="https://gist.github.com/2127325.js?file=gistfile1.xq"></script>

<script src="https://gist.github.com/2127351.js?file=gistfile1.txt"></script>

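<p>For a simple illustration of the same idea in XQuery -
the document URI and element names are just placeholders:</p>
<pre><code>(: a let binding... :)
let $doc := fn:doc('/a/b/c.xml')
return $doc/article/title
</code></pre>
<pre><code>(: ...and the equivalent let-free form :)
fn:doc('/a/b/c.xml')/article/title
</code></pre>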
<p>In XQuery this leads to a style of coding that I call "let-free".
In this style, there are no FLWOR expressions.
Really this is "FLWOR-free", not "let-free",
but that's too much of a mouthful for me.</p>
<p>But why would you write let-free code?</p>
<p>The answer is scalability - you knew it would be, right?
This breaks out into concurrency and streaming.
Let's talk about concurrency first.
In the MarkLogic Server implementation of XQuery,
every <code>let</code> is evaluated in sequence. However, other expressions
are evaluated lazily with concurrency-friendly "future values".
So a performance-critical single-threaded request can sometimes
benefit from let-free style. You can see this technique in use
in some of my code:
the <a href="https://github.com/marklogic/semantic">semantic library</a>
or the <a href="https://github.com/mblakele/task-rebalancer">task-server forest rebalancer</a>.
Both of these projects try to benefit from multi-core CPUs.</p>
<p>The let-free style can also help with query scalability
by allowing the results to stream,
rather than buffering the entire result sequence.
If you need to export large result sets, for example,
this technique can help avoid <code>XDMP-EXPNTREECACHEFULL</code> errors.
Those errors result when your query's working set is too large
to fit in the expanded tree cache, a sort of scratch space for XML trees.
But streaming results don't have to fit into the cache.</p>
<p>For example, let's suppose you need to list every document URI in the database.
But you do not have the URI lexicon enabled,
and you cannot reindex to create it.</p>
<script src="https://gist.github.com/2127363.js?file=gistfile1.xq"></script>

<script src="https://gist.github.com/2127371.js?file=gistfile1.xq"></script>

<p>Note that nested evaluations cannot stream, either. So even a let-free query
may throw <code>XDMP-EXPNTREECACHEFULL</code> in cq or another development tool.
To test this query, use an HTTP app server module instead.
This is ideal for web service implementations too.</p>
<p>In this example we used function mapping, a MarkLogic extension to XQuery 1.0.
If a function takes a single argument but is called using a sequence,
the evaluator simply maps the sequence to multiple function calls.
This is somewhat faster than a FLWOR, and it can stream.</p>
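<p>A minimal sketch of function mapping - the helper function here is hypothetical:</p>
<pre><code>xquery version "1.0-ml";
declare function local:uri($doc as document-node())
as xs:string
{ xdmp:node-uri($doc) };

(: function mapping: one call per document, no FLWOR needed :)
local:uri(collection())
</code></pre>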
<p>Besides using function mapping, let-free style can use XPath steps.
However, this technique only works for sequences of nodes.</p>
<script src="https://gist.github.com/2127388.js?file=gistfile1.xq"></script>

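<p>For instance, assuming documents with a top-level <code>article</code> element:</p>
<pre><code>(: an XPath step instead of a FLWOR - streams, but nodes only :)
collection()/article/title
</code></pre>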
<p>While these techniques are useful, they can make for code that is
hard to read and tricky to debug. Function mapping is especially prone to errors
that are difficult to diagnose. If a function signature specifies an argument
without a quantifier or with the <code>+</code> quantifier,
and the runtime argument is empty, the function will not be called at all.
This is surprising, since normally the function would be called
and would cause a strong typing error.</p>
<script src="https://gist.github.com/2127394.js?file=gistfile1.xq"></script>

<script src="https://gist.github.com/2127403.js?file=gistfile1.xq"></script>

<p>The first expression returns the empty sequence,
while the second throws the expected strong typing error <code>XDMP-AS</code>.
This behavior is annoying, but in some applications
the benefits of function mapping outweigh this drawback.
We can make debugging easier if we weaken the function signature
to <code>document-node()?</code> so that the function will be called
even when the argument is empty. If needed, we can include an explicit check
for empty input too.</p>
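<p>A sketch of that weakened signature - the element names are placeholders:</p>
<pre><code>xquery version "1.0-ml";
declare function local:title($doc as document-node()?)
as xs:string?
{
  (: explicit check for empty input :)
  if (empty($doc)) then ()
  else $doc/article/title/string()
};

(: called even when the argument is the empty sequence :)
local:title(fn:doc('/no/such/document.xml'))
</code></pre>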
<p>Another let-free trick is to use module variables.
These act much like <code>let</code> bindings, but they can stream.</p>
<script src="https://gist.github.com/2127416.js?file=gistfile1.xq"></script>

<p>This example is a bit contrived, since the module variable doesn't add anything.
But if you find yourself struggling to refactor a <code>let</code> as a function call
or an XPath step, consider using a module variable.
Module variables are also excellent tools for avoiding repeated work,
since the right-hand expression is evaluated lazily and is never
evaluated more than once. If the evaluation does not use the module variable,
then the right-hand expression is never evaluated.
In contrast, the right-hand expression of a <code>let</code> is evaluated
even when the <code>return</code> does not use its value.</p>
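<p>A small sketch of that laziness:</p>
<pre><code>xquery version "1.0-ml";
(: evaluated lazily, and at most once per request :)
declare variable $URIS as xs:string* :=
  xdmp:node-uri(collection());

(: if nothing references $URIS, its expression never runs :)
count($URIS)
</code></pre>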
<p>As always, do not optimize code unless there is a problem to solve.
There are also some situations where the let-free style isn't appropriate.
Aside from making your code harder to read and more difficult to debug,
let-free style simply doesn't work in situations where your FLWOR
would have an <code>order by</code> clause.
And after all, streaming won't work for that case anyway.
The evaluator can't sort the result set without buffering it first.</p>]]></content:encoded>
    </item>
    <item>
      <title>Conditional Profiling for MarkLogic</title>
      <link>http://blakeley.com/blogofile/2011/12/14/conditional-profiling-for-marklogic</link>
      <pubDate>Wed, 14 Dec 2011 15:16:17 UTC</pubDate>
      <category><![CDATA[XQuery]]></category>
      <category><![CDATA[MarkLogic]]></category>
      <guid isPermaLink="true">http://blakeley.com/blogofile/2011/12/14/conditional-profiling-for-marklogic</guid>
      <description>Conditional Profiling for MarkLogic</description>
      <content:encoded><![CDATA[<p>Today I pushed <a href="https://github.com/mblakele/cprof">cprof</a> to GitHub.
This XQuery library helps application developers
who need to retrofit existing applications with profiling capabilities.
Just replace all your existing calls to
<code>xdmp:eval</code>, <code>xdmp:invoke</code>, <code>xdmp:value</code>,
<code>xdmp:xslt-eval</code>, and <code>xdmp:xslt-invoke</code> with corresponding <code>cprof:</code> calls.
Add a little logic around <code>cprof:enable</code> and <code>cprof:report</code>, and you are done.</p>]]></content:encoded>
    </item>
    <item>
      <title>Before you upgrade to 5.0-1</title>
      <link>http://blakeley.com/blogofile/archives/599</link>
      <pubDate>Thu, 03 Nov 2011 08:47:15 UTC</pubDate>
      <category><![CDATA[MarkLogic]]></category>
      <guid>http://blakeley.com/blogofile/archives/599</guid>
      <description>Before you upgrade to 5.0-1</description>
      <content:encoded><![CDATA[
Thinking about upgrading to <a href="http://developer.marklogic.com/">MarkLogic Server 5.0-1</a>?
<br/><br/>
As usual, back up everything. I haven't seen any data loss myself, but it is your data so be careful.
<br/><br/>
If you have made any changes to Docs (port 8000) or App Services (8002), the app-services portion of the upgrade won't happen (but the rest of the server will be fine). If you want to use the new monitoring services, you want that part of the upgrade to happen.
<br/><br/>
The fix is to revert your changes to ports 8000 and 8002. If you have repurposed either port for <a href="http://github.com/marklogic/cq/">cq</a>, you may want to go into cq and export any <em>local</em> sessions before changing anything. Local sessions in cq are tied to local browser storage, which is tied to host and port, so you will lose access to them if you change the cq port. Not many folks seem to use cq's local sessions, but I thought I'd mention it. Whether you use cq on those ports or not, make sure port 8000 has root <code>Docs/</code> and 8002 has root <code>Apps/</code> or <code>Apps/appbuilder/</code> - you can see these checks in <code>Admin/lib/upgrade.xqy</code>, function <code>check-prereqs-50</code>.
<br/><br/>
If <code>upgrade.xqy</code> decides not to upgrade your App Services configuration, it will log a message "Skipping appservices upgrades, prerequisites not met." at level "error". The rest of the server will function correctly, but you won't get the appservices part of 5.0.<br/><br/>
]]></content:encoded>
    </item>
    <item>
      <title>Rebalancing for CoRB</title>
      <link>http://blakeley.com/blogofile/archives/597</link>
      <pubDate>Tue, 01 Nov 2011 20:50:34 UTC</pubDate>
      <category><![CDATA[XQuery]]></category>
      <category><![CDATA[MarkLogic]]></category>
      <guid>http://blakeley.com/blogofile/archives/597</guid>
      <description>Rebalancing for CoRB</description>
      <content:encoded><![CDATA[
I've written some quick scripts for <a href="https://github.com/mblakele/corb-rebalancer">rebalancing forests in a MarkLogic Server database</a>. This leverages CoRB, and makes the job fairly simple. So if you add more forests to a database, and don't have the luxury of clearing and reloading, I hope this code will help.<br/><br/>
]]></content:encoded>
    </item>
    <item>
      <title>MarkLogic 5.0 - First Look</title>
      <link>http://blakeley.com/blogofile/archives/577</link>
      <pubDate>Tue, 01 Nov 2011 12:24:23 UTC</pubDate>
      <category><![CDATA[XQuery]]></category>
      <category><![CDATA[MarkLogic]]></category>
      <guid>http://blakeley.com/blogofile/archives/577</guid>
      <description>MarkLogic 5.0 - First Look</description>
      <content:encoded><![CDATA[
In case you have missed the news, <a href="http://developer.marklogic.com/download">MarkLogic Server 5.0-1</a> is now available. The upgrade went smoothly for me, but this is a major release so it is wise to back up your databases and configuration before upgrading. The on-disk forest version appears to have changed, which will trigger reindexing of all forests. You may want to manually disable reindexing before upgrading, so that you don't have to contend with multiple forests trying to reindex at the same time.
<br/><br/>
This is also a good time to double-check your free disk space, since reindexing uses extra disk space. Some of that space won't be released when reindexing finishes, either. For example, one of my forests looked like this:
<br/><br/>
<div>
<a href="/blogofile/images/wp-content/2011/11/Screen-shot-2011-11-01-at-10.15.11-.png"><img class="size-medium wp-image-580" title="Forest status after reindexing" src="/blogofile/images/wp-content/2011/11/Screen-shot-2011-11-01-at-10.15.11--300x47.png" alt="This forest is holding on to over 2-GiB of deleted fragments." width="300" height="47" /></a>
</div>
<br/><br/>
You can purge those deleted fragments by forcing a merge of the forest, or of the entire database. After doing this, my forest used less disk space.
<br/><br/>
<div>
<a href="/blogofile/images/wp-content/2011/11/Screen-shot-2011-11-01-at-10.24.34-.png"><img class="size-medium wp-image-580" title="Forest status after forced merge" src="/blogofile/images/wp-content/2011/11/Screen-shot-2011-11-01-at-10.24.34--300x32.png" alt="After the forced merge, the deleted fragments are gone and the forest is smaller." width="300" height="32" /></a>
</div>
<br/><br/>
This new release is stricter about unquoted attributes. With previous releases this would generally work, even though the <a href="http://www.w3.org/TR/xquery/#doc-xquery-DirectConstructor">XQuery 1.0 Recommendation</a> requires quoted attribute values:
<p style="padding-left: 30px;"><span style="font-family: monospace;">&lt;test a={xdmp:random()}/&gt;</span></p>
<br/><br/>
Now it throws an <code>XDMP-UNEXPECTED</code> error. Quote the attribute value correctly, and the problem is fixed.
<p style="padding-left: 30px;"><span style="font-family: monospace;">&lt;test a="{xdmp:random()}"/&gt;</span></p>
<br/><br/>
I'm looking forward to learning more about the 5.0 release, but so far it looks good.<br/><br/>
]]></content:encoded>
    </item>
  </channel>
</rss>
