Create your own collectors

Sometimes it is useful to perform several operations in a single stream pass. This can be achieved by building our own collector. For example, suppose we need to get the maximal and minimal values of a series. We can achieve it with a custom collector.

public class MaxMinCollector2 implements Collector<Integer, MaxMinContainer, MaxMinContainer>{

    public void accumulate(MaxMinContainer container, Integer val){

        if(container.max == null){
            container.max = val;
        }else if(container.max < val){
            container.max = val;
        }

        if(container.min == null){
            container.min = val;
        }else if(container.min > val){
            container.min = val;
        }

    }

    public MaxMinContainer combine(MaxMinContainer a, MaxMinContainer b){
        if(a.max == null){
            b.getMax().ifPresent(v -> a.max = v);
        }else {
            b.getMax().ifPresent(v -> a.max = a.max < v ? v : a.max);
        }

        if(a.min == null){
            b.getMin().ifPresent(v -> a.min = v);
        }else {
            b.getMin().ifPresent(v -> a.min = a.min > v ? v : a.min);
        }

        return a;
    }

    @Override
    public Supplier<MaxMinContainer> supplier() {
        return MaxMinContainer::new;
    }

    @Override
    public BiConsumer<MaxMinContainer, Integer> accumulator() {
        return this::accumulate;
    }

    @Override
    public BinaryOperator<MaxMinContainer> combiner() {
        return this::combine;
    }

    @Override
    public Function<MaxMinContainer, MaxMinContainer> finisher() {
        return (a) -> a;
    }

    @Override
    public Set<Characteristics> characteristics() {
        return new HashSet<>(Arrays.asList(Characteristics.IDENTITY_FINISH, Characteristics.UNORDERED));
    }
}
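The MaxMinContainer class used above is not shown in the original snippet; a minimal sketch, assuming only the fields and Optional-returning accessors the collector relies on:

```java
import java.util.Optional;

// Minimal sketch of the mutable holder the collector accumulates into.
// Field and accessor names are inferred from the collector code above;
// the original class definition is not shown.
public class MaxMinContainer {

    Integer max;
    Integer min;

    public Optional<Integer> getMax() {
        return Optional.ofNullable(max);
    }

    public Optional<Integer> getMin() {
        return Optional.ofNullable(min);
    }
}
```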

and its corresponding test:

package de.lacambra.utils.collectors;

import org.junit.Before;
import org.junit.Test;

import java.util.stream.IntStream;

import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;

public class MaxMinCollectorTest {

    MaxMinCollector cut;

    @Before
    public void setUp() throws Exception {
        cut = new MaxMinCollector();
    }

    @Test
    public void testMaxMinSerie() {

        MaxMinCollector result = IntStream.of(1, 2, 3, 4, 5, 6)
                .collect(MaxMinCollector::new, MaxMinCollector::accumulate, MaxMinCollector::combine);

        assertThat(result.getMax().get(), is(6));
        assertThat(result.getMin().get(), is(1));

        result = IntStream.of(1000, 2, 3342, 421, 523, 6)
                .collect(MaxMinCollector::new, MaxMinCollector::accumulate, MaxMinCollector::combine);

        assertThat(result.getMax().get(), is(3342));
        assertThat(result.getMin().get(), is(2));
    }

    @Test
    public void emptySerie(){
        MaxMinCollector result = IntStream.of()
                .collect(MaxMinCollector::new, MaxMinCollector::accumulate, MaxMinCollector::combine);

        assertThat(result.getMax().isPresent(), is(false));
        assertThat(result.getMin().isPresent(), is(false));
    }

    @Test
    public void oneValueSerie(){
        MaxMinCollector result = IntStream.of(34)
                .collect(MaxMinCollector::new, MaxMinCollector::accumulate, MaxMinCollector::combine);

        assertThat(result.getMax().get(), is(34));
        assertThat(result.getMin().get(), is(34));
    }
}
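As a side note, for plain min/max of an int series the JDK already ships a ready-made accumulator, IntSummaryStatistics, which covers the same cases (including the empty stream) without writing a custom collector:

```java
import java.util.IntSummaryStatistics;
import java.util.stream.IntStream;

public class SummaryStatisticsExample {

    public static void main(String[] args) {
        // summaryStatistics() collects count, sum, min, max and average in one pass
        IntSummaryStatistics stats = IntStream.of(1000, 2, 3342, 421, 523, 6)
                .summaryStatistics();

        System.out.println(stats.getMax()); // 3342
        System.out.println(stats.getMin()); // 2

        // An empty stream yields count 0 (min/max hold sentinel values)
        IntSummaryStatistics empty = IntStream.of().summaryStatistics();
        System.out.println(empty.getCount()); // 0
    }
}
```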

Create a correct 201 Created response with JAX-RS

@POST
@Path("project")
public Response createProject(@Context UriInfo uriInfo, JsonObject json) {
        Project project = projectConverter.fromJson(json);
        project.setWorkspace(getCurrentWorkspace());
        project = em.merge(project);

        URI uri = uriInfo.getBaseUriBuilder()
                .path(ProjectResource.class)
                .resolveTemplate(PathExpressions.workspaceId, getCurrentWorkspace().getId())
                .resolveTemplate(PathExpressions.projectId, project.getId())
                .build();

        return Response.created(uri).build();
}
The @Context UriInfo uriInfo parameter provides information about the current URI.
The .path(ProjectResource.class) call appends the path declared on ProjectResource.
The .resolveTemplate(PathExpressions.workspaceId, getCurrentWorkspace().getId()) call replaces the workspaceId template variable with the actual workspace id.

Once the whole URI has been built, it only needs to be put into a created response.
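For resolveTemplate to find the template variables, the resource class has to declare them in its @Path annotation. A hypothetical ProjectResource consistent with the code above (the concrete path value is an assumption, it is not shown in the original):

```java
// Hypothetical resource declaration; only the template variable names
// (workspaceId, projectId) come from the code above, the rest is invented.
@Path("workspaces/{workspaceId:\\d+}/projects/{projectId:\\d+}")
public class ProjectResource {
    // ...
}
```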

Collect into a Jsonp JsonArray without using forEach

Each collector has three parts:

  • A supplier: provides instances of the accumulator.
  • An accumulator: accumulates the objects being collected. Several accumulator instances can be used.
  • A combiner: combines all the accumulators, putting all collected objects together.

For the JsonArray, the supplier, accumulator and combiner are respectively:

JsonArrayBuilder createArrayBuilder()
JsonArrayBuilder add(JsonValue value)
JsonArrayBuilder add(JsonArrayBuilder builder)

    public JsonArray getArray(Jsonable[] objects) {
        return Stream.of(objects).map(Jsonable::toJson)
                .collect(
                        Json::createArrayBuilder,
                        JsonArrayBuilder::add,
                        JsonArrayBuilder::add
                ).build();

    }

    public static class Jsonable {

        public JsonObject toJson() {
            return Json.createObjectBuilder().add("someId", LocalTime.now().toString()).build();
        }
    }

Configure a local development environment on MacOS with Docker

Using dnsmasq for development domains

With dnsmasq I am redirecting my development domains to their corresponding targets:

  • *.dev: localhost
  • *.dock: docker container

So no more IP addresses in the browser URL.

Install dnsmasq on Mac OS X:

(from http://passingcuriosity.com/2013/dnsmasq-dev-osx/)

brew install dnsmasq
# Copy the default configuration file.
cp $(brew list dnsmasq | grep /dnsmasq.conf.example$) /usr/local/etc/dnsmasq.conf
# Copy the daemon configuration file into place.
sudo cp $(brew list dnsmasq | grep /homebrew.mxcl.dnsmasq.plist$) /Library/LaunchDaemons/
# Start Dnsmasq automatically.
sudo launchctl load /Library/LaunchDaemons/homebrew.mxcl.dnsmasq.plist

Configure dnsmasq:

(/usr/local/etc/dnsmasq.conf)

#Redirect *.dev urls to 127.0.0.1
address=/dev/127.0.0.1

#Redirect *.dock urls to docker host url (192.168.99.100 in my case)
address=/dock/192.168.99.100

and restart dnsmasq:

sudo launchctl stop homebrew.mxcl.dnsmasq
sudo launchctl start homebrew.mxcl.dnsmasq

Then you need to tell the OS to use dnsmasq for the desired domains. Most UNIX-like operating systems have a configuration file called /etc/resolv.conf which controls the way DNS queries are performed, including the default server to use for DNS queries (this is the setting that gets set automatically when you connect to a network or change your DNS servers in System Preferences).

OS X also allows you to configure additional resolvers by creating configuration files in the /etc/resolver/ directory. This directory probably won't exist on your system, so your first step should be to create it:

sudo mkdir -p /etc/resolver

Now you should create a new file in this directory for each resolver you want to configure. There are a number of details you can configure for each resolver, but I generally only bother with two:

  • the name of the resolver (which corresponds to the domain name to be resolved); and
  • the DNS server to be used.

Create a new file with the same name as your new top-level domain (I'm using dev and dock) in the /etc/resolver/ directory and add a nameserver to it by running the following commands:

sudo tee /etc/resolver/dev >/dev/null <<EOF
nameserver 127.0.0.1
EOF

Binding all services

I am developing an application that needs several resources:

  • Wildfly AS
  • MySQL DB
  • Keycloak
  • An nginx server as a proxy.

To avoid writing the ports each time, I am using nginx to redirect the requests to the correct container. All the containers must be able to locate the proxy server and the proxy will do the rest. To achieve that, when we run a container we need to link it to the proxy server under the domain of the required service. E.g. to get my Wildfly app signing in against the Keycloak server, we run the containers as follows:

docker run --link proxy:keycloak.dock some/wildfly
docker run --link proxy:app.dock some/keycloak

Configuring the proxy:

In order to allow the proxy to bind all services we need to add some server configs. For the Wildfly app, a file named app.dock will be added in the conf.d directory:

server {
    listen      80;
    server_name app.dock;

    location / {
        proxy_pass http://${docker-host-ip}:${wildfly-app-port};
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # needed to map the ip address to the correct domain
        proxy_redirect http://${docker-host-ip}:${wildfly-app-port} http://app.dock;
        proxy_redirect ${docker-host-ip}:${keycloak-port} http://keycloak.dock;
    }

    access_log /var/log/nginx/app.dock_access.log;
    error_log /var/log/nginx/app.dock_error.log;
}

and for the keycloak:

server {
    listen      80;
    server_name keycloak.dock;

    location / {
        proxy_pass http://${docker-host-ip}:${keycloak-port};
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # needed to map the ip address to the correct domain
        proxy_redirect ${docker-host-ip}:${keycloak-port} http://keycloak.dock;
    }

    access_log /var/log/nginx/keycloak.dock_access.log;
    error_log /var/log/nginx/keycloak.dock_error.log;
}

Get java.util.logging working in unit testing

If you need to activate java.util.logging in your tests, you can achieve it just by adding the VM option
-Djava.util.logging.config.file=/path/to/logging.properties

where logging.properties can be something like

handlers = java.util.logging.ConsoleHandler
.level=INFO
your.package.level = FINE
java.util.logging.ConsoleHandler.level = FINE

You can find a more complete example on https://svn.apache.org/repos/asf/river/jtsk/skunk/surrogate/testfiles/logging.properties
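A quick way to check that the configuration is actually picked up is a small class that logs at both levels; with the properties above (and your.package adapted to the package this class lives in), the FINE message only appears when the config file is loaded. The class name here is arbitrary:

```java
import java.util.logging.Logger;

// Small sketch to verify the logging configuration is applied.
// Adapt "your.package" in logging.properties to this class's package.
public class LoggingCheck {

    private static final Logger LOG = Logger.getLogger(LoggingCheck.class.getName());

    public static void main(String[] args) {
        LOG.info("visible with the default INFO level");
        LOG.fine("only visible when the logger and handler levels are FINE");
    }
}
```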


Run nano on docker ubuntu image

I have tried to run nano on an ubuntu docker image and after installing it I always have this error:
Error opening terminal: unknown.

The solution is as easy as running export TERM=xterm inside the docker container.

The problem is that it does not survive a restart, as in any bash session actually. I will try to add it in the Dockerfile, but since the terminal is set by the run command I am not very optimistic about it.


Use multiple parameters with jjs in linux shebang script

If you try to use multiple parameters in the script on a linux OS like this,
#!/usr/bin/jjs -strict -scripting -fv

you will see something like this:
"-strict -scripting -fv" is not a recognized option. Use "-h" or "-help" to see a list of all supported options.

That happens because the Linux interpreter passes all the parameters as a single parameter.
The workaround is to use *-J-Dnashorn.args* like

#!/usr/bin/jjs -J-Dnashorn.args= -strict -scripting -fv

and it will work like a breeze.

Load test data to DB with javaEE

In the persistence.xml file, it is possible to use the property javax.persistence.sql-load-script-source
This property allows you to populate your datasource with given data.
The value is just a sql script file containing the insert statements to be used.
Analogously, you can also use properties to drop data. Many other options are available. If you are interested, just take a look at the section *9.4 Schema Generation* of [*JSR-000338 Java™ Persistence 2.1*](http://download.oracle.com/otndocs/jcp/persistence-2_1-fr-eval-spec/index.html).

My persistence.xml file is like follows:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1" xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
<persistence-unit name="Cookinghelper" transaction-type="JTA">
    <jta-data-source>java:jboss/datasources/cookinghelper</jta-data-source>
    <exclude-unlisted-classes>false</exclude-unlisted-classes>
    <properties>
        <property name="hibernate.show_sql" value = "false" />
        <property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/>
        <property name="javax.persistence.sql-load-script-source" value="META-INF/test-data.sql"/>
    </properties>
</persistence-unit>
</persistence>
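The script referenced by javax.persistence.sql-load-script-source is a plain SQL file with one statement per line; a hypothetical META-INF/test-data.sql could look like this (table and column names are invented for illustration):

```sql
-- Hypothetical sample data; table and column names are illustrative only
INSERT INTO recipe (id, name) VALUES (1, 'Paella');
INSERT INTO recipe (id, name) VALUES (2, 'Tortilla');
```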

Run Vert.x app from intellij

To run Vert.x from IntelliJ, it is only required to create a standard Application run configuration with the following parameters:

  • Main class: the Vert.x main class, i.e. io.vertx.core.Starter or whatever it is for your version.
  • VM options: whatever you need, or empty
  • Program arguments: run de.lacambra.vertx.MyFirstVerticle
  • Working directory: normally your project directory