One of the system processes provided in SFG is FileGatewayMailboxRoute. It’s typically called from Mailbox Routing Rules in B2Bi. The system mailbox schedule evaluates these rules and creates a list of any new messages since the last run. For advanced users, this is effectively a test on the MBX_NEEDS_ROUTING table: anything that matches a rule triggers the requisite process.
The standard process retrieves a list of mailbox message IDs and does the bare minimum: it invokes the child process FileGatewayMailboxRouteArrivedFile with the message_id.
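Condensed, that loop body looks roughly like the sketch below. The ProcessData path `RoutingRequest/MessageId` is an assumption for illustration; `INVOKE_MODE` and `WFD_NAME` are the usual Invoke Business Process Service parameters, but verify against your B2Bi version:

```xml
<operation name="Route one arrived file">
  <!-- Hand the current mailbox message id to the child process.
       RoutingRequest/MessageId is an illustrative ProcessData path. -->
  <participant name="InvokeBusinessProcessService"/>
  <output message="Xout">
    <assign to="INVOKE_MODE">SYNC</assign>
    <assign to="WFD_NAME">FileGatewayMailboxRouteArrivedFile</assign>
    <assign to="MessageId" from="RoutingRequest/MessageId[1]/text()"/>
  </output>
  <input message="Xin">
    <assign to="." from="*"/>
  </input>
</operation>
```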
There’s also a tuning parameter that controls the maximum number of messages sent to any routing process.
“So what’s the issue?” It’s a simple process with a loop—and you can control that loop. But simplicity can hide cost.
When tuning B2Bi performance, always split the problem into what you can control and what you can't.
Looping in B2Bi is effectively a while/for-each pattern: while there are more messages, process the next one. ProcessData carries parameters and state, so if you start with x messages you carry a list of x items for at least x steps, and often 3x steps even with low persistence. You may also be storing the history of called processes. The net effect is payload growth, and turning persistence down doesn't remove the in-memory dead weight: you still pay for it.
Running many concurrent instances doesn’t automatically help. It loads the system, and because FIFO isn’t guaranteed, timely delivery suffers.
Operationally: would you rather have 9,500 messages parked in the mailbox with 500 in flight, or 10,000 all in flight? The latter often happens in reality, but reliable tuning requires control—aim for the former.
Point: a very simple process, handled with critical thinking, can make a huge difference.
Trick 1 — Shrinking the Payload as You Go
First, work out a safe limit—what’s your real bottleneck? Payload is one. A simple optimisation: process the first item, then release it rather than maintaining a big counter-driven list.
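A minimal sketch of that release step, assuming the ids sit under a `MessageList` element (names illustrative): the Release service's `TARGET` parameter names the ProcessData node to drop.

```xml
<!-- Process MessageList/Message[1], then remove it so the list
     shrinks on every pass instead of dragging a counter along. -->
<operation name="Release processed entry">
  <participant name="ReleaseService"/>
  <output message="releaseRequest">
    <assign to="TARGET">MessageList/Message[1]</assign>
  </output>
  <input message="releaseResponse">
    <assign to="." from="*"/>
  </input>
</operation>
```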
(While you’re at it, clean up any other litter you don’t need.)
This makes the process lighter on each loop—simple and effective.
Another approach is to use a classic EDI batching trick: Document Extraction.
Turning the contents of ProcessData into a document means the list becomes a static link to a cache rather than a heavy in-memory list. As with most EDI patterns, it’s tried and trusted.
Create the document with DOMToDoc or XML Encoder:
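Something along these lines, where the `RoutingRequest` branch and the document name are assumptions for illustration:

```xml
<!-- DOMToDoc serialises a ProcessData node-set into a workflow
     document; here it becomes the new PrimaryDocument. -->
<assign to="PrimaryDocument"
        from="DOMToDoc(/ProcessData/RoutingRequest, 'MessageList.xml')"/>
```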
Then release the dead weight from ProcessData immediately.
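For instance, a Release call against the now-redundant branch (`RoutingRequest` being the assumed name of the original list):

```xml
<operation name="Release original list">
  <participant name="ReleaseService"/>
  <output message="releaseRequest">
    <!-- The document already holds the data; the ProcessData copy
         is dead weight from here on. -->
    <assign to="TARGET">RoutingRequest</assign>
  </output>
  <input message="releaseResponse">
    <assign to="." from="*"/>
  </input>
</operation>
```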
Split with Document Extraction (yes, it’s multi-use, so a bit verbose):
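A hedged sketch of the operation: the participant and parameter names below follow a typical Document Extraction configuration, so check them against your own service definition before reuse.

```xml
<operation name="Split message list">
  <!-- Splits the PrimaryDocument into one small document per entry.
       Parameter names are illustrative, not authoritative. -->
  <participant name="DocumentExtractionService"/>
  <output message="DocumentExtractionTypeInputMessage">
    <assign to="BatchLikeDocuments">false</assign>
    <assign to="." from="*"/>
  </output>
  <input message="inmsg">
    <assign to="." from="*"/>
  </input>
</operation>
```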
This uses pointers into the extract to position the loop and creates a new PrimaryDocument for each MessageId. You can grab it via:
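For example, with DocToDOM, the companion extension function that reads a document back into a node-set (the `MessageId` element name is an assumption):

```xml
<!-- Pull the id out of the per-loop PrimaryDocument. -->
<assign to="CurrentMessageId"
        from="DocToDOM(PrimaryDocument)/MessageId/text()"/>
```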
Yes, each loop creates a new tiny document (maintenance overhead), but remember: ProcessData also persists. There’s no free lunch—choose the trade-off that best suits speed vs. storage.
Final Control — A Message Ceiling with Smart Batching
You need a ceiling (parameterised in the real world). You can’t control when a spike hits, so plan for the worst. What to do with MessageIds above the threshold? Put them back in the queue—here, that’s MBX_NEEDS_ROUTING.
Avoid single-row inserts inside a tight loop. Batch them.
Oracle pattern:
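One option is Oracle's multi-table INSERT ALL, which lands many rows in a single statement. The MESSAGE_ID column and the literal ids are illustrative; check the real MBX_NEEDS_ROUTING schema before use.

```sql
-- One round trip instead of three single-row inserts.
INSERT ALL
  INTO MBX_NEEDS_ROUTING (MESSAGE_ID) VALUES (1001)
  INTO MBX_NEEDS_ROUTING (MESSAGE_ID) VALUES (1002)
  INTO MBX_NEEDS_ROUTING (MESSAGE_ID) VALUES (1003)
SELECT 1 FROM dual;
```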
SQL Server equivalent:
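On SQL Server, the table value constructor does the same job in one statement (same caveat on column names):

```sql
-- Up to 1000 rows per VALUES list in a single round trip.
INSERT INTO MBX_NEEDS_ROUTING (MESSAGE_ID)
VALUES (1001), (1002), (1003);
```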
Net effect: same outcome, far fewer round trips.
Important Deployment Notes
Full BPML (Reference)