Saturday, October 26, 2019

Mule Integration - Write Elegant Code

Introduction

Recently, I encountered the following piece of Mule DataWeave 2.0 function code.

fun getEmployeePovince(worker) = getStateAbbrevation(worker) match {
     case "AB" -> "AB"
     case "BC" -> "BC"
     case "MB" -> "MB"
     case "NB" -> "NB"
     case "NL" -> "NL"
     case "NS" -> "NS"
     case "NT" -> "NT"
     case "NU" -> "NU"
     case "ON" -> "ON"
     case "PE" -> "PE"
     case "QC" -> "QC"
     case "SK" -> "SK"
     case "YT" -> "YT"
     else -> "ZZ"
    }

The purpose of the above function is to get the home-province code of a Canadian employee. Canadian province codes look like BC for British Columbia, ON for Ontario, etc. The input is an XML object that contains information about the employee. If the worker's province code is not in the list, it defaults to "ZZ".

The above code works and is about to go to production. However, this kind of code is not very elegant, to say the least. It reads like an amateur's code!

Improvement One

First of all, the gist of this kind of problem is to find a match in a given list of Strings. Mule DataWeave 2.0 provides a function for this, namely find (the documentation can be found here). The find function works like the following:
['aa', 'bb', 'cc'] find 'xy'  //return [], empty array
['aa', 'bb', 'cc', 'aa'] find 'aa' //return [0, 3]
Now the code using the find function should be clear. If the array returned by find is empty (isEmpty(...)), use the default, "ZZ"; otherwise, use the input.

The following is the DataWeave code:

%dw 2.0
var CanadianProvinces = ["AB", "BC", "MB", "NB", "NL", "NS", "NT", "NU", "ON", "PE", "QC", "SK", "YT"]
var pv = payload.state
fun findAKey(aKey) = CanadianProvinces find aKey

output application/json
---
{
 province: if(isEmpty(findAKey(pv))) "ZZ" else pv,
}
The above code is much cleaner. We put the province-code constants into an array and use the find function. However, the above code is still not good for maintenance. Suppose I want to use the same logic for US state codes: we would have to modify the DataWeave code. We can improve further by putting the constants into a property file.
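As an illustrative analogue in Python (not DataWeave), the whole improvement boils down to a membership check against a list of constants, with a default value:

```python
# Python sketch of the lookup-with-default pattern used above.
# The province list and the default "ZZ" come from the article.
CANADIAN_PROVINCES = ["AB", "BC", "MB", "NB", "NL", "NS", "NT", "NU",
                      "ON", "PE", "QC", "SK", "YT"]
DEFAULT_PROVINCE = "ZZ"

def normalize_province(code):
    # Accept the input code only if it is a known province; else default.
    return code if code in CANADIAN_PROVINCES else DEFAULT_PROVINCE

print(normalize_province("ON"))  # ON
print(normalize_province("XX"))  # ZZ
```

The same structure carries over to the DataWeave version: the list of constants, a lookup, and a default.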

Improvement Two

First, we put the two constants into the YAML property file as follows:
provinces: ["AB", "BC", "MB", "NB", "NL", "NS", "NT", "NU", "ON", "PE", "QC", "SK", "YT"]
constants:
   default: "ZZ"

%dw 2.0
var provinces = p('provinces')
var pv = payload.state
var dp = p('constants.default')
fun findAKey(aKey) = provinces find aKey

output application/json
---
{
 province: if(isEmpty(findAKey(pv))) dp else pv,
}

Take Aways

  1. Two DataWeave 2.0 features: match and find
  2. Array constants in a YAML property file
  3. When we encounter strange code, we should think about how to improve it. There is always a way to write elegant code.

Tuesday, September 3, 2019

Install JSON Plugin In Anypoint Studio

Introduction

The JSON plugin is a very useful tool for editing and verifying JSON schemas or JSON example data in our API development. Unfortunately, it does not come with Anypoint Studio. To me, Anypoint Studio should more or less behave like Eclipse, which allows us to drag and drop any available plugin. This short article describes the procedure to install the plugin.

Installation of JSON Plugin

First download the plugin zip file from this website to ~/Download. The file name should be like:

rw-r--r--@   1 gl17  staff   112K Sep  3 14:38 jsonedit-repository-0.9.7.zip

Second, in AnypointStudio, Help --> Install New Software --> Add

Note: select Archive and choose the local file.

After installation, restart Anypoint Studio. Now if you open any JSON file, Studio will validate it.

Saturday, August 24, 2019

Manage Mule Runtime Using Linux Services On RHEL 7

Introduction

This article describes the procedure to run the Mule runtime as a systemd service, for Mule standalone runtime clustering on-premises or in private clouds. Since RHEL 7, systemd is the required init system; the traditional init.d approach is obsolete. For details about systemd and unit files, you may refer to this article.

Assumptions

  • Mule standalone runtime is installed at /opt/mule/runtime/current
  • Mule runtime can be started and stopped by the command /opt/mule/runtime/current/bin/mule start | stop
  • The mule user has sudo permission

Create Unit File

First, we need to create a file, mule.service at /etc/systemd/system, with the following contents:
# file: /etc/systemd/system/mule.service
# Systemd unit file for mule standalone runtime
[Unit]
Description=Mule Runtime Standalone Runtime
After=syslog.target network.target

[Service]
Type=forking
WorkingDirectory=/opt/mule/runtime/current
Environment=JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-0.el7_6.x86_64
Environment=MULE_HOME=/opt/mule/runtime/current
TasksMax=infinity
LimitNOFILE=65335

ExecStart=/opt/mule/runtime/current/bin/mule start
ExecStop=/opt/mule/runtime/current/bin/mule stop

User=mule
Group=mule

RestartSec=10
Restart=always

[Install]
WantedBy=multi-user.target

$ cd /etc/systemd/system/
$ sudo chmod 644 mule.service
$ sudo systemctl daemon-reload
$ sudo systemctl enable mule.service
The above command enables mule.service. Enabling simply creates a symlink in /etc/systemd/system/multi-user.target.wants (the target named in the WantedBy= line of the unit file):
mule.service -> /etc/systemd/system/mule.service
Now, we are ready to start the mule runtime as a service. To do this, we must first stop the running Mule runtime:
$ cd /opt/mule/runtime/current
$ bin/mule stop
Now, run the following command:
$ sudo systemctl start mule.service
It may take some time before we get the prompt back. After that, we can check whether the Mule runtime is running with the following command:
$ sudo systemctl status mule.service
● mule.service - Mule Runtime Standalone Runtime
   Loaded: loaded (/etc/systemd/system/mule.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2019-08-24 15:46:04 CDT; 2h 45min ago
 Main PID: 17555 (wrapper-linux-x)
   CGroup: /system.slice/mule.service
           ├─17555 /opt/mule/runtime/current/lib/boot/exec/wrapper-linux-x86-64 /opt/mule/runtime/current/conf/wrapper.conf wrapper.s...
           └─17569 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-0.el7_6.x86_64/jre/bin/java -Dmule.home=/opt/mule/runtime/current -D...

Aug 24 15:45:41 wp37mulerte01.aci.awscloud systemd[1]: Starting Mule Runtime Standalone Runtime...
Aug 24 15:45:41 wp37mulerte01.aci.awscloud mule[17439]: MULE_HOME is set to /opt/mule/runtime/current
Aug 24 15:45:41 wp37mulerte01.aci.awscloud mule[17439]: MULE_BASE is set to /opt/mule/runtime/current
Aug 24 15:45:42 wp37mulerte01.aci.awscloud mule[17439]: Starting Mule Enterprise Edition...
Aug 24 15:46:04 wp37mulerte01.aci.awscloud mule[17439]: Waiting for Mule Enterprise Edition.......................
Aug 24 15:46:04 wp37mulerte01.aci.awscloud mule[17439]: running: PID:17555
Aug 24 15:46:04 wp37mulerte01.aci.awscloud systemd[1]: Started Mule Runtime Standalone Runtime.
As you can see, the Mule runtime is running. If you want to know the details about the Mule runtime, you can use the following command:
$ sudo systemctl -l status mule.service
The -l option prints full, untruncated lines, so you can see the complete arguments passed to the JVM. To view the history of stops and starts of the Mule runtime, we can use the following command:
$ sudo journalctl -u mule.service
-- Logs begin at Tue 2019-08-06 23:11:10 CDT, end at Sat 2019-08-24 18:39:13 CDT. --
Aug 22 21:40:03 wd35mulerte01.aci.awscloud systemd[1]: Starting Mule Runtime Standalone Runtime...
Aug 22 21:40:03 wd35mulerte01.aci.awscloud mule[31293]: MULE_HOME is set to /opt/mule/runtime/current
Aug 22 21:40:03 wd35mulerte01.aci.awscloud mule[31293]: MULE_BASE is set to /opt/mule/runtime/current
Aug 22 21:40:05 wd35mulerte01.aci.awscloud systemd[1]: mule.service: control process exited, code=exited status=1
Aug 22 21:40:05 wd35mulerte01.aci.awscloud systemd[1]: Failed to start Mule Runtime Standalone Runtime.
Aug 22 21:40:05 wd35mulerte01.aci.awscloud systemd[1]: Unit mule.service entered failed state.
Aug 22 21:40:05 wd35mulerte01.aci.awscloud systemd[1]: mule.service failed.
Aug 22 21:40:14 wd35mulerte01.aci.awscloud systemd[1]: Stopped Mule Runtime Standalone Runtime.
Aug 22 21:52:39 wd35mulerte01.aci.awscloud systemd[1]: Starting Mule Runtime Standalone Runtime...
Aug 22 21:52:39 wd35mulerte01.aci.awscloud mule[32610]: MULE_HOME is set to /opt/mule/runtime/current
Aug 22 21:52:39 wd35mulerte01.aci.awscloud mule[32610]: MULE_BASE is set to /opt/mule/runtime/current
Aug 22 21:52:40 wd35mulerte01.aci.awscloud mule[32610]: Starting Mule Enterprise Edition...
Aug 22 21:53:03 wd35mulerte01.aci.awscloud mule[32610]: Waiting for Mule Enterprise Edition.........................
Aug 22 21:53:04 wd35mulerte01.aci.awscloud mule[32610]: running: PID:32750
Aug 22 21:53:04 wd35mulerte01.aci.awscloud systemd[1]: Started Mule Runtime Standalone Runtime.
Aug 22 21:54:15 wd35mulerte01.aci.awscloud systemd[1]: Stopping Mule Runtime Standalone Runtime...
Aug 22 21:54:15 wd35mulerte01.aci.awscloud mule[633]: MULE_HOME is set to /opt/mule/runtime/current
Aug 22 21:54:15 wd35mulerte01.aci.awscloud mule[633]: MULE_BASE is set to /opt/mule/runtime/current
Aug 22 21:54:15 wd35mulerte01.aci.awscloud mule[633]: Stopping Mule Enterprise Edition...
Aug 22 21:54:18 wd35mulerte01.aci.awscloud systemd[1]: Stopped Mule Runtime Standalone Runtime.
Aug 22 21:55:43 wd35mulerte01.aci.awscloud systemd[1]: Starting Mule Runtime Standalone Runtime...
Aug 22 21:55:44 wd35mulerte01.aci.awscloud mule[912]: MULE_HOME is set to /opt/mule/runtime/current
Aug 22 21:55:44 wd35mulerte01.aci.awscloud mule[912]: MULE_BASE is set to /opt/mule/runtime/current
Aug 22 21:55:45 wd35mulerte01.aci.awscloud mule[912]: Starting Mule Enterprise Edition...
Aug 22 21:56:08 wd35mulerte01.aci.awscloud mule[912]: Waiting for Mule Enterprise Edition.........................
Aug 22 21:56:08 wd35mulerte01.aci.awscloud mule[912]: running: PID:1091
Aug 22 21:56:08 wd35mulerte01.aci.awscloud systemd[1]: Started Mule Runtime Standalone Runtime.

That is it. It is very helpful to study the systemctl and journalctl commands.

Sunday, August 18, 2019

Two-Way SSL In Mule Application - Part 2

Introduction

In my previous article on DZone (or here), I omitted the procedure to create a trust store for the application. This is important if the application is deployed to CloudHub.

In this article, I will describe the procedure to create the trust store and how to configure the HTTPS request for a Mule application.

Create A Trust Store For Mule HTTPS Request

The procedure to import the server's PEM certificate into a trust store is the following.

First, we will create a trust store using the following command:

keytool -genkey -keyalg RSA -alias cyberark-poc -keystore truststore.ks
Enter anything at the prompts; the values are not important, as we will delete this entry.

Second, delete the content of the trust store just created:

keytool -delete -alias cyberark-poc -keystore truststore.ks
Third, import the server's certificate:
keytool -import -v -trustcacerts -alias cyberark-server -file SERVER-CERT.pem -keystore truststore.ks
Now, copy truststore.ks into the Mule application project's src/main/resources directory.

HTTPS Request Configuration

The complete HTTPS request configuration sets a TLS context on the request connection, referencing both the client key store (client.pfx) and the trust store (truststore.ks).
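As a sketch only — the configuration name, host, and passwords below are placeholders, not values from the original setup — a Mule 4 HTTPS request configuration that references both stores typically looks like this:

```xml
<http:request-config name="HTTPS_Request_configuration">
    <http:request-connection protocol="HTTPS" host="example.com" port="443">
        <tls:context>
            <tls:trust-store type="jks" path="truststore.ks" password="changeit"/>
            <tls:key-store type="pkcs12" path="client.pfx"
                           keyPassword="changeit" password="changeit"/>
        </tls:context>
    </http:request-connection>
</http:request-config>
```

The tls:key-store entry carries the client certificate used for mutual authentication, while tls:trust-store tells Mule which server certificates to accept.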
Note: I put both client.pfx and the trust store in the src/main/resources directory. You may put them in a different directory; in that case, you need to give the full path relative to the Mule application project, such as ssh/cert/client.pfx.

The Key Takeaways

The best practices for certificate handling are:
  1. If the deployment is on-prem, import the servers' certificates into cacerts. This way, if a server's certificate expires, we just need to re-import it; no code change is required.
  2. If the deployment is on CloudHub, we have to import the servers' certificates into a trust store as described in this article.
  3. Use the JKS format for the trust store used in the HTTPS request. It is the most popular one.

Sunday, August 11, 2019

Two-Way SSL In Mule Application

Introduction

In my previous article, I explained how Two-Way SSL works in the context of a Mule application. Many people have asked how to set up an HTTPS request in a Mule application. This article provides the details of the procedure to invoke HTTPS services that require Two-Way SSL, or Mutual Authentication. Before we dive into the detailed procedures, let's review how Two-Way SSL works between clients and servers.

The gist of Two-Way SSL is the exchange of certificates between clients and servers. The details are pretty complicated and beyond the scope of this article. At a high level, the scheme of the exchange is:
  1. The client sends a ClientHello message to the server
  2. The server replies with ServerHello, the server's certificate, and a request for the client's certificate
  3. The client sends its certificate and other information, such as the cipher scheme and verification of the server's certificate
  4. The server replies with the cipher scheme
  5. The two sides start to exchange application data
Now, how do we setup Mule Application as client?

Client's Certificate Generation

In general, an IT admin will generate client certificates similar to the way I describe in my blog here. Let's assume that for now, so we can describe how to set up the Mule HTTPS request. Before we continue, we need to obtain the server's certificate in advance. Certificates come in many forms, like JKS, PKCS12, PEM, etc. The Mule HTTPS request supports three formats:
  • JKS
  • PKCS12
  • JCEKS
Say we got the PEM format from the server. We need to do one of two things, depending on the deployment pattern:
  • if it is an on-prem deployment, the best way is to import the cert into the JVM cacerts
  • if it is deployed to MuleSoft CloudHub, we need to convert the PEM to PKCS12
For an on-prem deployment, we can import the PEM certificate directly into cacerts. Here is the procedure (make sure you have sudo permission, and the server's cert is named like SERVER_CERT.pem):
cd ${JAVA_HOME}/jre/lib/security
cp /path/to/SERVER_CERT.pem .
sudo keytool -import -alias mule1-cyberark -keystore cacerts -file SERVER_CERT.pem
To be sure that server's cert is in pem format, you can use the following command:
$ openssl x509 -in SERVER_CERT.pem -text
If it is CloudHub deployment, we need to convert the pem file to PKCS12 format. Here is the command:
$ openssl pkcs12 -export -nokeys -in SERVER_CERT.pem -out SERVER_CERT.pfx

Note the "-nokeys" option. It means I do not have the private key of the certificate. Now the server's certificate is taken care of. Next, we need to convert the client's certificate to PKCS12. Here is the command to do so:

 openssl pkcs12 -export -in cacert.pem -inkey cakey.pem -out identity.p12 -name "mykey"

Note that the above command will ask for a password. Make sure you remember it.

Setup Mule Flow

The following diagram shows the simple Mule flow.
The HTTPS request configuration sets a TLS context on the request connection. The important point is that the client's certificate (identity.p12) is configured as the key store, and the server's certificate (SERVER_CERT.pfx) is configured as the trust store.

Friday, August 9, 2019

How To Pass MuleSoft Certified Developer - Level 1 (Mule 4)

Congratulation Gary!

First of all, I must congratulate myself for getting this done. For almost a year, I had been thinking of taking the exam, but my project schedules were crazy and I could hardly find time to prepare for the certification. Three weeks ago, I decided to give it a shot. I studied two weekends and spent about 1 hour each day. And today, I did it! I must say it feels like an accomplishment. This test is not easy!
Here I will try to summarize my feelings about the test and how I prepared for it. Hopefully, it will provide some help to those who are thinking of taking the test.

The Procedures

First of all, go to the Mule training website, pay 250 USD online, and schedule a test at the same time. I did my test at my local test center. There, I had to lock away my cell phone, watch, wallet, even my hat! It is a very quiet and comfortable place, and there were not many people there either.

The exam is 2 hours long, which is plenty of time to ponder each question carefully. In the first 10 minutes, MuleSoft did a survey about my experience with Mule, how I prepared for the test, what role I play, etc. I am not sure why they ask these questions at all. One thing that really makes me suspicious is that if you are a very experienced developer, you may get harder questions. It is just my guess!

How Was My Test Results?

It took me 88 minutes to submit the answers. I knew I had plenty of time, so I read each question very carefully and did not plan to review them once I was done with all the questions.

The following are the results I got:

Creating Application Networks: 100.00%

Designing APIs: 100.00%

Building API Implementation Interfaces: 100.00%

Deploying and Managing APIs and Integrations: 75.00%

Accessing and Modifying Mule Events: 83.33%

Structuring Mule Applications: 66.66%

Routing Events: 80.00%

Handling Errors: 80.00%

Troubleshooting and Testing Mule Applications: 66.66%

Writing DataWeave Transformations: 100.00%

Using Connectors: 83.33%

Processing Records: 100.00%

Result: PASS
Roughly, my overall score is about 89%. I think I did pretty well, given that I did not have enough time to prepare. I have no idea why my troubleshooting score is only 66.66%. I thought that was my strongest area.

How Do I Feel About The Questions?

Overall, I think the questions are very good, but many of them are very hard to answer with confidence. About 15% of the questions are really hard.

The challenges come from several fronts. Firstly, most of the questions are very long. You must read each question at least twice before you answer it. This really tests your English, as the questions are very tricky. Secondly, with most questions it is hard to be 100% sure at first glance; you have to read them very carefully. Sometimes a bracket, comma, or semicolon makes the difference. Thirdly, some questions cover things rarely encountered in real life, such as exporting artifacts from Anypoint Studio. We normally don't do this. All in all, I think the MuleSoft training department did a good job. The Mule 4 MCD seems a bit harder than the Mule 3 one.

How I Prepared My Test

I really didn't have the time to go through all the training materials, and definitely no time to go through all the DIY exercises.

Here is the procedure I took. If you are an experienced developer and want to pass the test on the first shot, you can follow my way.

  1. Go to the final quiz in the last module of the training material and try to answer the questions. They are actually pretty difficult; many of them I really had trouble answering the first time. The good news is that for each question you answer incorrectly, the website tells you the right answer, so you can figure out why.
  2. Quickly go through each chapter and all the slides. Note down what each chapter is about.
  3. After the first two steps, I started to create many mini-programs to really understand the details of connectors, error handling, different scopes, etc.
  4. Take notes. Take a lot of notes. These notes really helped me remember the nitty-gritty details. Remember, to pass the test, you must pay attention to tiny details.

Some Thoughts About The Certification

It is definitely worth the time and energy to prepare for and take the exam. It really makes us start paying attention to details and think about the reasons behind the way MuleSoft implements connectors, design patterns, and other systems. It is not for beginners anyway.

Don't take the delta test. To me, it is better to challenge ourselves and take the full Mule 4 MCD.

The questions have a lot of room to improve.

  • They should really focus on how our daily development works.
  • Questions should focus more on the development of Mule flows and less on memorization of syntax.
  • More questions on design and troubleshooting.
  • More questions on problem solving skills.

Tuesday, August 6, 2019

Mule 4: Enable HTTPS Connector Using openssl

Introduction

This article demonstrates the procedure to generate self-signed certificates using openssl, and how to use the private key to configure the HTTPS connector.

Generate Private Key And Public Cert Using openssl

$ openssl req -newkey rsa:2048 -x509 -keyout cakey.pem -out cacert.pem -days 3650
Generating a RSA private key
....+++++
...................................................+++++
writing new private key to 'cakey.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:US
State or Province Name (full name) []:Texas
Locality Name (eg, city) [Default City]:Dallas
Organization Name (eg, company) [Default Company Ltd]:GGL Consulting Inc
Organizational Unit Name (eg, section) []:EA
Common Name (eg, your name or your server's hostname) []:Gary Liu
Email Address []:gary.liu1119@gmail.com
The above command will generate two files:
  1. cakey.pem
  2. cacert.pem
The MuleSoft HTTPS TLS configuration supports 3 formats:
  1. JKS -- Java KeyStore
  2. PKCS12 -- for details, refer to this page
  3. JCEKS -- stands for Java Cryptography Extension KeyStore
We need to convert the PEM files to PKCS12 using the following command:
$ openssl pkcs12 -export -in cacert.pem -inkey cakey.pem -out identity.p12 -name "mykey"
Enter pass phrase for cakey.pem:
Enter Export Password:
Verifying - Enter Export Password:
The above command generates a file named identity.p12 with the alias mykey. Now we can configure the HTTPS connector.

Configure HTTPS Connector

The XML configuration sets a TLS context with a PKCS12 key store on the HTTP listener, referencing the identity.p12 file generated above.
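As a sketch only — the listener name, host, port, and passwords below are placeholders, not values from the original setup — a Mule 4 HTTPS listener configuration using the identity.p12 key store typically looks like this:

```xml
<http:listener-config name="HTTPS_Listener_config">
    <http:listener-connection protocol="HTTPS" host="0.0.0.0" port="8082">
        <tls:context>
            <tls:key-store type="pkcs12" path="identity.p12" alias="mykey"
                           keyPassword="changeit" password="changeit"/>
        </tls:context>
    </http:listener-connection>
</http:listener-config>
```

The alias and passwords must match what you entered during the openssl pkcs12 export step.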
The following snapshots show the procedure using Anypoint Studio:

Invoke The Service

To test the service we can use the following curl command:
$ curl -k -XGET https://localhost/helloworld
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    31  100    31    0     0     31      0  0:00:01  0:00:01 --:--:--    29
{
  "message": "Hello, World"
}
Note: the -k option tells curl to accept self-signed certificates.

Sunday, August 4, 2019

Dataweave Tricks: Extract Keys and Values From HashMap

Introduction

I have a requirement to extract the keys and values from a dataset like the following:
{
  "account1" : {
      "accountID" : "1234",
      "name" : "Mary Loo",
      "balance" : 234.32
     },
  "account2" : {
      "accountID" : "1234",
      "name" : "Lauren Flor",
      "balance" : 234.32
     },
  "account3" : {
      "accountID" : "1234",
      "name" : "Mary Loo",
      "balance" : 234.32
     }         
}
Apparently, the above data is a Map. Now we need to produce two datasets, the values and the keys, from the input (Map), as the following:
[
    {
        "accountID": "1234",
        "name": "Mary Loo",
        "balance": 234.32
    },
    {
        "accountID": "1234",
        "name": "Lauren Flor",
        "balance": 234.32
    },
    {
        "accountID": "1234",
        "name": "Mary Loo",
        "balance": 234.32
    }
]
and
[
    "account1",
    "account2",
    "account3"
]

Understanding Dataweave pluck Function

DataWeave provides a function designed for exactly this kind of requirement: pluck. Here is the solution to extract the values of a HashMap:
%dw 2.0
output application/json
---
payload pluck (item, key, index) -> item
The shorthand version:
%dw 2.0
output application/json
---
payload pluck $
And to extract the keys:
%dw 2.0
output application/json
---
payload pluck (item, key, index) -> key
The shorthand version:
%dw 2.0
output application/json
---
payload pluck $$
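As an illustrative analogue in Python (not DataWeave), pluck on a map behaves like reading the values and keys of a dict:

```python
# Python sketch of DataWeave pluck on a map-like payload.
# The data shape is taken from the article's example.
payload = {
    "account1": {"accountID": "1234", "name": "Mary Loo", "balance": 234.32},
    "account2": {"accountID": "1234", "name": "Lauren Flor", "balance": 234.32},
    "account3": {"accountID": "1234", "name": "Mary Loo", "balance": 234.32},
}

values = list(payload.values())  # like: payload pluck $
keys = list(payload.keys())      # like: payload pluck $$
```

Both produce arrays in the entry order of the map, matching the two outputs shown above.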

Saturday, August 3, 2019

Dataweave 2.0 Tricks: Sorting and Grouping

The Challenges

I have Accounts retrieved from Salesforce like the following:

[
    {
        "LastModifiedDate": "2015-12-09T21:29:01.000Z",
        "Id": "0016100000Kngh3AAB",
        "type": "Account",
        "Name": "AAA Inc."
    },
    {
        "LastModifiedDate": "2015-12-09T20:16:47.000Z",
        "Id": "0016100000KnXKhAAN",
        "type": "Account",
        "Name": "AAA Inc."
    },
    {
        "LastModifiedDate": "2015-12-12T02:06:48.000Z",
        "Id": "0016100000KqonvAAB",
        "type": "Account",
        "Name": "AAA Inc."
    },
...
]
The dataset contains many accounts that have the same name. Accounts with the same name are regarded as duplicates. Eventually, I want to delete the duplicates and leave just one in SFDC. Before I delete the duplicates, I need to create an output for review, like the following:
{
    "AAA Inc.": [
        "0016100000Kngh3AAB",
        "0016100000KnXKhAAN",
        "0016100000KqonvAAB",
        "0016100000KnggyAAB",
        "0016100000KngflAAB",
        "0016100000KqalVAAR",
        "0016100000Kngh8AAB",
        "0016100000KnVUKAA3",
        "0016100000Kngh5AAB",
        "0016100000KnVXdAAN",
        "0016100000KnVh4AAF",
        "0016100000KnVs6AAF",
        "0016100000KnggAAAR",
        "0016100000KnlokAAB",
        "0016100000KnggKAAR"
    ],
    "Adam Smith": [
        "0016100000L7sDjAAJ"
    ],
    "Alice John Smith": [
        "0016100000L7x29AAB"
    ],
    "Alice Smith.": [
        "0016100000L7sDiAAJ"
    ],
...

Solutions

I devised a two-stage solution. The first transform creates a LinkedHashMap that contains the account name as the key and an array of Accounts as the value, as shown below:
%dw 2.0
output application/java
---
//payload groupBy $.Name orderBy $$
(payload groupBy (account) -> account.Name)  orderBy (item, key) -> key
The second stage of transformation is to extract the account ID as the following:
%dw 2.0
output application/java
---
payload mapObject (item, key, index) -> {  
 (key) : (item map (value) -> value.Id)  
}
Of course, I can put the two Dataweave scripts into one like the following:
%dw 2.0
output application/java
---
//payload groupBy $.Name orderBy $$
((payload groupBy (account) -> account.Name)  orderBy (item, key) -> key)
mapObject (item, key, index) -> {  
 (key) : (item map (value) -> value.Id)  
}
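As an illustrative analogue in Python (not DataWeave), the combined transform groups by name, orders the keys, and maps each group to its Ids (records shortened from the article's dataset):

```python
# Python sketch of the combined transform: group accounts by Name,
# order alphabetically, then map each group to its array of Ids.
from itertools import groupby

accounts = [
    {"Id": "0016100000Kngh3AAB", "Name": "AAA Inc."},
    {"Id": "0016100000L7sDjAAJ", "Name": "Adam Smith"},
    {"Id": "0016100000KnXKhAAN", "Name": "AAA Inc."},
]

by_name = sorted(accounts, key=lambda a: a["Name"])  # like orderBy
review = {
    name: [a["Id"] for a in group]                   # like mapObject + map
    for name, group in groupby(by_name, key=lambda a: a["Name"])  # like groupBy
}
print(review)
# {'AAA Inc.': ['0016100000Kngh3AAB', '0016100000KnXKhAAN'], 'Adam Smith': ['0016100000L7sDjAAJ']}
```

Note that Python's groupby only groups adjacent records, which is why the sort must come first; DataWeave's groupBy has no such restriction.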

Key Learnings

The key concept of the above use case is to group the accounts with the same name and sort them in alphabetical order. The MuleSoft documentation for groupBy and orderBy, together with the other core DataWeave functions, can be found here. The groupBy function has three overloads:

1. groupBy(Array, (item: T, index: Number) -> R): { (R): Array }
2. groupBy({ (K)?: V }, (value: V, key: K) -> R): { (R): { (K)?: V } }
3. groupBy(Null, (Nothing, Nothing) -> Any): Null
The first overload indicates that it can take an array as input. The usage looks like the following:
//payload groupBy $.Name
//payload groupBy (account, index) -> account.Name
payload groupBy (account) -> account.Name
The three lines above are equivalent. The first is the shorthand version; the second and third lines show the lambda style. As a good developer, you should know all of the syntaxes.

In my solution of the second stage, I use mapObject function as the following:

payload mapObject (item, key, index) -> {  
 (key) : (item map (value) -> value.Id)  
}
This is because the payload is a LinkedHashMap and the value of each entry is an array. That is why I have to use the map function inside the mapObject function.

In my work, I also need to remove the type attribute in the Account object.

[
    {
        "LastModifiedDate": "2015-12-09T21:29:01.000Z",
        "Id": "0016100000Kngh3AAB",
        "type": "Account",
        "Name": "AAA Inc."
    },
...
]
Here is the transform to remove the type:
(payload orderBy (item) -> item.Name) map (account) -> {
 (account -- ['type'])
}
As you can see, I have used the -- function.
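As an illustrative analogue in Python (not DataWeave), removing an attribute from each ordered record works the same way as the -- usage above:

```python
# Python sketch of the DataWeave `--` transform: order by Name, then
# drop the "type" key from each account (shape from the article).
accounts = [
    {"Id": "0016100000L7sDjAAJ", "type": "Account", "Name": "Adam Smith"},
    {"Id": "0016100000Kngh3AAB", "type": "Account", "Name": "AAA Inc."},
]

cleaned = [
    {k: v for k, v in a.items() if k != "type"}
    for a in sorted(accounts, key=lambda a: a["Name"])
]
```

Each resulting record keeps Id and Name but no longer carries the type attribute.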

Summary

The key to solving this kind of problem is knowing how to group, sort, and extract values from an Array or a LinkedHashMap. Thus I have used the following core DataWeave functions:
  • groupBy
  • orderBy
  • map
  • mapObject
Also, we should know the shorthand way and the lambda style of using DataWeave functions. The shorthand way is to use the built-in variables $, $$, and $$$.
If the payload is an Array:
  • $ - item
  • $$ - index
If the payload is a LinkedHashMap:
  • $ - value
  • $$ - key
  • $$$ - index
The Lambda style is like the following:
(payload orderBy (item) -> item.Name) map (account, index) -> {
 (account -- ['type'])
}
mapObject (item, key, index) -> {  
 (key) : (item map (value) -> value.Id)  
}

Mule Application Hacking: Reveal Details Of A Connector's Connection

Introduction

In my last hacking post, I described a method to view the source code of Mule connectors using ECD. In this article, I am going to demonstrate the procedure to reveal the details of the connection and communication within a Mule connector. This is very useful for troubleshooting purposes.

I will use Salesforce connector as example to demonstrate how to read connection details, query, etc.

Details

First, look for the connector as shown in the following snapshots:
As you can see, I have added the Salesforce connector version 9.7.7. Now, we need to expand the connector: look for mule-salesforce-connector-9.7.7-mule-plugin.jar and expand the jar file as shown below:
Now we can see that the Java class packages are all under org.mule.extension.salesforce.

The next step is to edit the log4j2.xml file in the src/main/resources directory.

Add the package org.mule.extension.salesforce to the Loggers section of the file.
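As a sketch only — assuming the default log4j2 configuration generated by Anypoint Studio — the logger entry to add inside the Loggers section looks like:

```xml
<AsyncLogger name="org.mule.extension.salesforce" level="DEBUG"/>
```

Setting the level to DEBUG for this package turns on verbose logging for the whole connector.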
Now, if you run the project, the console will display debugging information about the connection, the requests, and the responses from the Salesforce connector.

Saturday, July 27, 2019

Hacking Mule Application - View Source Code

Introduction

To become a really good Mule developer, we need to understand the source code of Mule connectors, components, and other internals. This helps us learn the internal data models and interfaces of Mule classes. I have written a blog on how to compile and install ECD in Anypoint Studio. However, that build system is broken for Mule 4. Hopefully, it will be fixed soon.

This article demonstrates another way to view the source code. It is not the best solution yet, but it helps. The idea is to use the Eclipse IDE with the ECD plugin to review the Java source code.

ECD is so far the best Java decompiler available for Eclipse. The details can be found here.

Install Eclipse & ECD Plugin

First, download Eclipse EE from this site. Second, create a Java project. Third, install the ECD Eclipse plugin: go to the Eclipse Marketplace page for ECD
and drag the Install button onto your Eclipse workbench.
Now go to Preferences; you should see Decompiler under Java, as shown below:
Fourth, import the jar archive: right-click the project --> Import --> Archive File,
and browse to the file in your local Maven repository. In my case, it is at .m2/repository/org/mule/connectors/mule-http-connector/1.5.3.
Import the jar file into the newly created project and drill down to the classes you are interested in, as shown in the following snapshot:
Note that you have to choose the decompiler by right-clicking a Java class --> Open With, and selecting it as shown below:
At this point, we can review the source code. The next step would be to decompile the whole jar file and rebuild it with source; that way we can debug the source code and modify the behavior of the connectors. I will cover that procedure later.

Saturday, June 29, 2019

SSL Handshake Failure Connecting To Mulesoft Anypoint Exchange In Corporate Environment

The Issue

As Mulesoft developers, we need to download connectors from Anypoint Exchange periodically. When we try to connect to Mulesoft Anypoint Exchange, which is the repository for Mulesoft connectors and other libraries, we may get an SSL handshake exception, in particular on a corporate-provided laptop. Here is the top part of the exception message:
eclipse.buildId=unknown
java.version=1.8.0_212
java.vendor=Oracle Corporation
BootLoader constants: OS=win32, ARCH=x86_64, WS=win32, NL=en_US
Command-line arguments:  -os win32 -ws win32 -arch x86_64

org.mule.tooling.core
Error
Thu Jul 11 17:49:09 CDT 2019
The following exceptions were encountered while resolving dependency com.mulesoft.connectors:mule-salesforce-connector:9.7.6: java.lang.RuntimeException: There was an issue resolving the dependency tree for the bundleDescriptors [[BundleDescriptor{groupId='com.mulesoft.connectors', artifactId='mule-salesforce-connector', baseVersion='null', version='9.7.6', type='jar', classifier=Optional[mule-plugin]}, BundleDescriptor{groupId='org.mule.connectors', artifactId='mule-objectstore-connector', baseVersion='null', version='1.0.0', type='jar', classifier=Optional[mule-plugin]}]]
 at org.mule.maven.client.internal.AetherMavenClient.resolvePluginBundleDescriptorsDependencies(AetherMavenClient.java:322)
 at org.mule.tooling.core.m2.internal.MuleMavenClientResolver.resolvePluginDependencies(MuleMavenClientResolver.java:80)
 at org.mule.tooling.core.module.internal.runner.DownloadTask.doRun(DownloadTask.java:76)
 at org.mule.tooling.core.module.internal.runner.Task.run(Task.java:65)
 at org.mule.tooling.core.module.internal.runner.DownloadTask.run(DownloadTask.java:1)
 at org.mule.tooling.core.module.internal.runner.ArtifactResolvingRunner$ArtifactJob.run(ArtifactResolvingRunner.java:212)
 at org.eclipse.core.internal.jobs.Worker.run(Worker.java:56)
Caused by: org.eclipse.aether.collection.DependencyCollectionException: Failed to collect dependencies at com.mulesoft.connectors:mule-salesforce-connector:jar:mule-plugin:9.7.6 -> com.mulesoft.connectors:mule-connector-commons:jar:2.1.1
 at org.eclipse.aether.internal.impl.DefaultDependencyCollector.collectDependencies(DefaultDependencyCollector.java:291)
 at org.eclipse.aether.internal.impl.DefaultRepositorySystem.collectDependencies(DefaultRepositorySystem.java:316)
 at org.mule.maven.client.internal.AetherMavenClient.doResolveDependencies(AetherMavenClient.java:408)
 at org.mule.maven.client.internal.AetherMavenClient.resolvePluginBundleDescriptorsDependencies(AetherMavenClient.java:314)
 ... 6 more
Caused by: org.eclipse.aether.resolution.ArtifactDescriptorException: Failed to read artifact descriptor for com.mulesoft.connectors:mule-connector-commons:jar:2.1.1
 at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.loadPom(DefaultArtifactDescriptorReader.java:282)
 at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.readArtifactDescriptor(DefaultArtifactDescriptorReader.java:198)
 at org.eclipse.aether.internal.impl.DefaultDependencyCollector.resolveCachedArtifactDescriptor(DefaultDependencyCollector.java:535)
 at org.eclipse.aether.internal.impl.DefaultDependencyCollector.getArtifactDescriptorResult(DefaultDependencyCollector.java:519)
 at org.eclipse.aether.internal.impl.DefaultDependencyCollector.processDependency(DefaultDependencyCollector.java:409)
 at org.eclipse.aether.internal.impl.DefaultDependencyCollector.processDependency(DefaultDependencyCollector.java:363)
 at org.eclipse.aether.internal.impl.DefaultDependencyCollector.process(DefaultDependencyCollector.java:351)
 at org.eclipse.aether.internal.impl.DefaultDependencyCollector.doRecurse(DefaultDependencyCollector.java:504)
 at org.eclipse.aether.internal.impl.DefaultDependencyCollector.processDependency(DefaultDependencyCollector.java:458)
 at org.eclipse.aether.internal.impl.DefaultDependencyCollector.processDependency(DefaultDependencyCollector.java:363)
 at org.eclipse.aether.internal.impl.DefaultDependencyCollector.process(DefaultDependencyCollector.java:351)
 at org.eclipse.aether.internal.impl.DefaultDependencyCollector.collectDependencies(DefaultDependencyCollector.java:254)
 ... 9 more
Caused by: org.eclipse.aether.resolution.ArtifactResolutionException: Could not transfer artifact com.mulesoft.connectors:mule-connector-commons:pom:2.1.1 from/to mulesoft-releases (https://repository.mulesoft.org/releases/): sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
 at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:444)
 at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:246)
 at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:223)
 at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.loadPom(DefaultArtifactDescriptorReader.java:267)
 ... 20 more


This article describes the procedure to fix this kind of issue.

Find The Root Cause

Problem-solving is really about finding the root cause of the issue. In this case, if we look at the error message carefully, we will find the following:

Caused by: org.eclipse.aether.resolution.ArtifactResolutionException: Could not transfer artifact com.mulesoft.connectors:mule-connector-commons:pom:2.1.1 from/to mulesoft-releases (https://repository.mulesoft.org/releases/): sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

What this error message is saying is that the Java process was trying to transfer data from the host repository.mulesoft.org, and the SSL handshake failed. To resolve this kind of problem, we need to import the certificates from these sites into the JVM's cacerts keystore.

Tasks

In this article, I import certificates from the following 3 sites:

  • anypoint.mulesoft.com
  • maven.anypoint.mulesoft.com
  • repository.mulesoft.org

Prerequisites

  1. On Windows, install Cygwin
  2. Have Admin privilege of the laptop

Solutions

When we connect to Mulesoft Anypoint Exchange, Anypoint Studio needs to go through the SSL handshake procedure before we can see the download page. If this process fails, typically because Anypoint Studio (a Java process) cannot validate the server certificate against its truststore, an SSLHandshakeException will be thrown by the studio. The following steps will fix the issue:

Step One: Download certificate from anypoint.mulesoft.com
openssl s_client -connect anypoint.mulesoft.com:443 -showcerts </dev/null 2>/dev/null |openssl x509 -outform PEM >anypoint.pem
Step Two: Download certificate from maven.anypoint.mulesoft.com
openssl s_client -connect maven.anypoint.mulesoft.com:443 -showcerts   </dev/null 2>/dev/null | openssl x509 -outform PEM >mulesoft.maven.pem
Step Three: Download certificate from repository.mulesoft.org
openssl s_client -connect repository.mulesoft.org:443 -showcerts   </dev/null 2>/dev/null | openssl x509 -outform PEM >mulesoft.repo.pem
Step Four: Copy the 3 certificates
cp *.pem /cygdrive/c/'Program Files'/Java/jdk1.8.0_212/jre/lib/security
As you can see, I am using Cygwin on Windows; on a MacBook Pro, JAVA_HOME will be different. In this case, JAVA_HOME is under:
/cygdrive/c/'Program Files'/Java/jdk1.8.0_212
Step Five: Import the certificates into cacerts
cd /cygdrive/c/'Program Files'/Java/jdk1.8.0_212/jre/lib/security
keytool -import -alias anypoint -keystore cacerts -file anypoint.pem
Repeat the same procedure for the other 2 certificates. Step Six: Restart Anypoint Studio.
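The steps above can be sketched as one loop. This is a rough automation, assuming openssl and keytool are on the PATH, JAVA_HOME points at your JDK, and you have write access to the JRE security directory; the alias naming (first label of the hostname) is my own convention, not a requirement:

```shell
# Download the server certificate of each host and import it into cacerts.
# The keystore password "changeit" is the JDK default.
hosts="anypoint.mulesoft.com maven.anypoint.mulesoft.com repository.mulesoft.org"
for h in $hosts; do
  alias_name="${h%%.*}"   # first label of the hostname, e.g. "anypoint"
  openssl s_client -connect "$h:443" -showcerts </dev/null 2>/dev/null \
    | openssl x509 -outform PEM > "$h.pem"
  keytool -import -noprompt -alias "$alias_name" -storepass changeit \
    -keystore "$JAVA_HOME/jre/lib/security/cacerts" -file "$h.pem"
done
```

On Windows under Cygwin, set JAVA_HOME to the Cygwin-style path shown earlier; keytool may need to run from an elevated shell.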

Saturday, March 16, 2019

Email Address Validation Using Dataweave Regex

Introduction

This short article is about how to use regex in DataWeave 2.0 for email validation. Regex is used widely in the DataWeave language, in particular by the two functions matches(...) and match(...). It is very important to master regular expressions in order to be professional on Mulesoft integration projects. There are a lot of references available online.

Use Case

We expect the output of the DataWeave transformation to look like the following, depending on the validity of each email address:
[
    {
        ...
        "invalidEmail": "johndo@yahoo"
        ...
    },
    {
        ...
        "PersonEmail": "john.smith@google.com"
        ...
    },
    ...
]

Invalid Emails

The following types of emails are invalid:
  1. beginning with a dot: .gary.liu@google.com
  2. ending with a dot: gary.liu@google.com.
  3. double dots: gary.liu@google..com
  4. domain name contains an underscore: gary.liu@att_rr.com
  5. domain name contains a space: gary.liu@att rr.com
  6. domain name contains any of the following: ,<>/[]
  7. no top-level domain: gary@google

Solution

%dw 2.0
output application/json
var regexEmail = /^[^.][a-zA-Z0-9.!#$%&'*+\/=?^_`{|}~-]+@[a-zA-Z0-9-](?!.*?\.\.)[^_ ; ,<>\/\\]+(?:\.[a-zA-Z0-9-]+)[^.]*$/
---
payload map {
 (validEmail: $.email) if ($.email matches regexEmail),
 (invalidEmail: $.email) if (not ($.email matches regexEmail))
}
The above DataWeave script is mostly self-explanatory. A few explanations are needed if you are not very familiar with regular expressions:
  1. negation: [^_;,\.] means that if the email domain contains an underscore _, a semicolon, a comma, etc., it is not a valid email
  2. ^ and $ represent the beginning and end of the line
  3. [a-zA-Z0-9] means any character Aa, Bb ... Zz, or any digit 0 to 9
  4. + matches one or more occurrences
  5. * matches zero or more occurrences
  6. (?!...) is a negative lookahead; (?!.*?\.\.) --> must not contain double dots: ..
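The negative lookahead can be sanity-checked outside DataWeave. Here is a rough sketch using GNU grep's Perl mode (-P) with a deliberately simplified pattern, not the full expression from the solution above:

```shell
# check prints "valid" or "invalid" for a candidate address.
# The pattern only illustrates the double-dot lookahead and the
# top-level-domain requirement; it is not a complete email validator.
check() {
  printf '%s\n' "$1" \
    | grep -qP '^(?!.*\.\.)[A-Za-z0-9._%+-]+@[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)+$' \
    && echo valid || echo invalid
}
check "john.smith@google.com"   # valid
check "gary.liu@google..com"    # invalid: double dots rejected by (?!.*\.\.)
check "gary@google"             # invalid: no top-level domain
```

Note that -P requires GNU grep; on macOS, install it via Homebrew or use perl directly.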

Saturday, March 2, 2019

How To Resolve Issue With: "General SSLEngine problem"

The Background

This happens when you enable HTTPS with your own certificates. In my case, I configured Anypoint Runtime Fabric with a self-signed certificate generated by the following command:
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365
The above command generates two files: cert.pem and key.pem. Their purpose is beyond the scope of this article. The error occurs when a local Mule flow calls the remote application deployed in Anypoint Runtime Fabric.
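Before importing, it is worth confirming what the generated certificate actually contains. A quick check, assuming cert.pem sits in the current directory:

```shell
# Print the subject and validity window of the self-signed certificate;
# the subject should match what you entered at the openssl req prompts.
openssl x509 -in cert.pem -noout -subject -dates
```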

Solution

To resolve this problem, we just need to import cert.pem into the cacerts file. The commands are (on macOS):
cd /Library/Java/JavaVirtualMachines/jdk1.8.0_181.jdk/Contents/Home/jre/lib/security
sudo keytool -import -trustcacerts -keystore cacerts -storepass changeit -alias rogers-poc-cert -file /Users/gl17/anypoint/certs/poc/cert.pem
Make sure to restart Anypoint Studio.

Monday, January 14, 2019

Deploy Mule 4 Application To Anypoint Runtime Fabric Using Maven Plugin

Introduction

In CI/CD processes, it is very common to use the Mule Maven plugin to build and deploy applications to CloudHub, or to on-premise private clouds on AWS, Azure, Google Cloud, etc. Since Mule 4, a lot about deployment has changed, in particular around Mule Runtime Fabric (RTF). RTF is actually a completely new infrastructure for Mule application deployment; I will cover more on that topic later. In this article, I am going to cover the following topics related to deployment to Anypoint Runtime Fabric (RTF):
  1. Prepare pom.xml setup to deploy mule project to Anypoint RTF
  2. Encrypt password
  3. Troubleshooting

If everything works, at the end, we should be able to achieve the following goals:

  • deploy mule projects (assets) to Anypoint Exchange
  • deploy mule projects to Anypoint Runtime Fabric

In order for the Mule Maven plugin to work, we need to change the following two files:

  • pom.xml
  • ~/.m2/settings.xml

The complete project for this article can be found at my GitHub repository.

Settings.xml

In order to deploy Mule applications using the Mule Maven plugin, we need to set the Anypoint credentials, which are the ones we use to log in to the Anypoint portal (http://anypoint.mulesoft.com). The best way to do this is to add the server information to ~/.m2/settings.xml as follows:
<servers>
    <server>
        <id>ExchangeRepository</id>
        <username>gary_liu_client</username>
        <password>{5XLgXNDBGBIHa99xNAyJ6gL+ZxyUiyIJNHWu0H7Ctew=}</password>
    </server>
</servers>
The password is encrypted. To encrypt passwords, we need to add another file, namely ~/.m2/settings-security.xml:
<settingsSecurity>
    <master>{Vq5Aso1ZkO4HdJrUscJTZEii4BcFy+khGiGxDNVNgc4=}</master>
</settingsSecurity>
The encrypted master password above was created by the following command:
$ mvn --encrypt-master-password MasterPassword
{+4nnH6EW9HcHAHYBGnloFCAZZHSC4W3Xp9Zls0LvBqk=}
Once we have encrypted the master password, we can encrypt the Anypoint portal password as follows:
mvn --encrypt-password AnypointPortalPassword
For more details about the maven password encryption, you may refer to the following page: https://maven.apache.org/guides/mini/guide-encryption.html

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

 <modelVersion>4.0.0</modelVersion>

 <groupId>fea874ca-11d9-4779-b1ce-90d49f738259</groupId>
 <artifactId>mule-maven-plugin</artifactId>
 <version>1.0.2</version>
 <packaging>mule-application</packaging>

 <name>mule-maven-plugin</name>

 <properties>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
  <mule.maven.plugin.version>3.2.3</mule.maven.plugin.version>
  <anypoint.uri>https://anypoint.mulesoft.com</anypoint.uri>
  <anypoint.provider>MC</anypoint.provider>
  <deployment.environment>QA</deployment.environment>

  <app.runtime>4.1.4</app.runtime>
  <deployment.target>gary-deployment</deployment.target>
  <app.cores>100m</app.cores>
  <app.memory>500Mi</app.memory>
 </properties>

 <build>
  <plugins>
   <plugin>
    <groupId>org.mule.tools.maven</groupId>
    <artifactId>mule-maven-plugin</artifactId>
    <version>${mule.maven.plugin.version}</version>
    <extensions>true</extensions>
    <configuration>
     <runtimeFabricDeployment>
      <uri>${anypoint.uri}</uri>
      <provider>${anypoint.provider}</provider>
      <environment>${deployment.environment}</environment>
      <target>${deployment.target}</target>
      <muleVersion>${app.runtime}</muleVersion>
      <server>ExchangeRepository</server>
      <applicationName>${app.name}</applicationName>
      <deploymentSettings>
       <replicationFactor>${deployment.replica}</replicationFactor>
       <cpuReserved>${app.cores}</cpuReserved>
       <memoryReserved>${app.memory}</memoryReserved>
      </deploymentSettings>
     </runtimeFabricDeployment>
     <classifier>mule-application</classifier>
    </configuration>
   </plugin>

   <plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>properties-maven-plugin</artifactId>
    <version>1.0.0</version>
    <executions>
     <execution>
      <phase>initialize</phase>
      <goals>
       <goal>read-project-properties</goal>
      </goals>
      <configuration>
       <files>
        <file>${maven.properties}</file>
       </files>
      </configuration>
     </execution>
    </executions>
   </plugin>
  </plugins>
 </build>

 <repositories>
  <repository>
   <id>ExchangeRepository</id>
   <name>Corporate Repository</name>
   <url>https://maven.anypoint.mulesoft.com/api/v1/organizations/${groupId}/maven</url>
   <layout>default</layout>
  </repository>
 </repositories>

 <dependencies>
  ......
 </dependencies>
 ......
</project>

The details are explained in the next section. But one item I must point out here: <groupId>fea874ca-11d9-4779-b1ce-90d49f738259</groupId>. The groupId is your Anypoint platform organization ID.

Explanations

The configuration in both pom.xml and settings.xml is pretty straightforward, but a few items are worth explaining. First, what is the latest version of the Mule Maven plugin? This can be found at: https://docs.mulesoft.com/release-notes/mule-maven-plugin/mule-maven-plugin-release-notes. Currently, the latest version is 3.2.3.

Secondly, I use another plugin in addition to the Mule Maven plugin, namely properties-maven-plugin. This plugin allows us to pass a property file to the pom.xml.

<plugin>
 <groupId>org.codehaus.mojo</groupId>
 <artifactId>properties-maven-plugin</artifactId>
 <version>1.0.0</version>
 <executions>
  <execution>
   <phase>initialize</phase>
   <goals>
    <goal>read-project-properties</goal>
   </goals>
   <configuration>
    <files>
     <file>${maven.properties}</file>
    </files>
   </configuration>
  </execution>
 </executions>
</plugin>
Note that in the configuration we use the ${maven.properties} variable. This allows us to pass the property file as:
mvn clean deploy -DmuleDeploy -Dmaven.properties=src/main/resources/mule.rtf.deploy.properties
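The file passed via -Dmaven.properties then supplies the values the pom references but does not define itself, here app.name and deployment.replica. A hypothetical src/main/resources/mule.rtf.deploy.properties might look like this (the values are assumptions for illustration only):

```properties
# Properties read by properties-maven-plugin at the initialize phase.
# app.name and deployment.replica are referenced in the pom as
# ${app.name} and ${deployment.replica}; these values are examples.
app.name=mule-rtf-demo-app
deployment.replica=1
```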
Thirdly, note that in the Mule Maven plugin configuration I use <server>ExchangeRepository</server>, as shown below. The ExchangeRepository is the server defined in ~/.m2/settings.xml:
<configuration>
 <runtimeFabricDeployment>
  <uri>https://anypoint.mulesoft.com</uri>
  <provider>MC</provider>
  <environment>QA</environment>
  <target>qa-azure-rtf</target>
  <muleVersion>4.1.4</muleVersion>
  <server>ExchangeRepository</server>
  <applicationName>${app.name}</applicationName>
  <deploymentSettings>
   <replicationFactor>1</replicationFactor>
   <cpuReserved>100m</cpuReserved>
   <memoryReserved>500Mi</memoryReserved>
  </deploymentSettings>
 </runtimeFabricDeployment>
 <classifier>mule-application</classifier>
</configuration>
The details of the parameters can be found at: https://docs.mulesoft.com/mule-runtime/4.1/runtime-fabric-deployment-mmp-reference.

Publish Assets To Exchange

In order to deploy a Mule application to RTF using the Mule Maven plugin, we must first publish the application as an asset to Anypoint Exchange. To me, this is an unnecessary extra step. To publish an artifact (asset) to Anypoint Exchange, we run the following command:
mvn clean package deploy -Dmaven.properties=src/main/resources/mule.rtf.deploy.properties
After the publishing completes, you can verify the result in the Anypoint Portal using the following web address:
https://anypoint.mulesoft.com/exchange/{groupId}/{artifactId}
Here is my example:
https://anypoint.mulesoft.com/exchange/fea874ca-11d9-4779-b1ce-90d49f738259/mule-maven-plugin/

Deploy Application To RTF

Deployment to RTF can be slow, and sometimes takes a very long time when it fails. I think the plugin should be improved here by deploying asynchronously instead of waiting and constantly checking with the Runtime Manager. Anyway, deployment to RTF can be done using the following command:
mvn clean package deploy -DmuleDeploy -Dmaven.properties=src/main/resources/mule.rtf.deploy.properties

Take Aways

The commands used in the process:
mvn --encrypt-master-password GaryLiu1234
mvn --encrypt-password GaryLiu1234
mvn clean package deploy -Dmaven.properties=src/main/resources/mule.rtf.deploy.properties
mvn clean package deploy -DmuleDeploy -Dmaven.properties=src/main/resources/mule.rtf.deploy.properties
Web address to check assets in the Anypoint Exchange in the Anypoint Portal:
https://anypoint.mulesoft.com/exchange/{groupId}/{artifactId}
Key considerations using the Mule Maven plugin in CI/CD:
  • Don't wait for the deployment to finish, as it can take a very long time
  • There are REST APIs for checking the deployment process; use them to verify the deployment status
  • At the moment, anypoint-cli does not work for RTF

Anypoint Studio Error: The project is missing MUnit libraries to run tests

Anypoint Studio 7.9 has a bug. Even if we follow the article: https://help.mulesoft.com/s/article/The-project-is-missing-MUnit-libraries-...