Surfing through logs with less

Shift+G: go to the end of the file.
Shift+F: tail the file (follow appended output).

  – Ctrl+C: return to normal mode (stop tailing)

?: search upwards
/: search downwards
 – n: jump to the next match in the current direction
 – Shift+N: jump to the next match in the opposite direction
To find two or more words on the same line, a regex is useful:
Word1.+Word2 matches lines that contain both words in that order.
When reviewing logs, the grep command with its after and before context options also helps:
grep -A n: print n lines after each match
grep -B n: print n lines before each match

Pass parameters to a Java main method using Gradle

To pass, for example, a param “-a” with value “192.168.99.100:7051” to the static void main method that the Gradle task will run, you will need to:

  • add the following entry to the build.gradle file, or check that it is already there:
    run {
        if (project.hasProperty("appArgs")) {
            args = Eval.me(appArgs)
        }
    }
  • run Gradle as follows:
    gradle run -PappArgs="['-a', '192.168.99.100:7051']"
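On the Java side nothing special is needed: the list passed via -PappArgs arrives as the regular args array. A minimal sketch for illustration (the class name Main is hypothetical):

```java
public class Main {

    // Invoked via: gradle run -PappArgs="['-a', '192.168.99.100:7051']"
    // the JVM then receives args = {"-a", "192.168.99.100:7051"}
    public static void main(String[] args) {
        for (String arg : args) {
            System.out.println("received argument: " + arg);
        }
    }
}
```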

Create your own collectors

Sometimes it is useful to extract several results in a single stream pass. This can be achieved by building our own collector. For example, suppose we need the maximum and minimum values of a series; we can achieve that with a custom collector.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.function.BiConsumer;
import java.util.function.BinaryOperator;
import java.util.function.Function;
import java.util.function.Supplier;
import java.util.stream.Collector;

public class MaxMinCollector implements Collector<Integer, MaxMinContainer, MaxMinContainer> {

    public void accumulate(MaxMinContainer container, Integer val){

        if(container.max == null){
            container.max = val;
        }else if(container.max < val){
            container.max = val;
        }

        if(container.min == null){
            container.min = val;
        }else if(container.min > val){
            container.min = val;
        }

    }

    public MaxMinContainer combine(MaxMinContainer a, MaxMinContainer b){
        if(a.max == null){
            b.getMax().ifPresent(v -> a.max = v);
        }else {
            b.getMax().ifPresent(v -> a.max = a.max < v ? v : a.max);
        }

        if(a.min == null){
            b.getMin().ifPresent(v -> a.min = v);
        }else {
            b.getMin().ifPresent(v -> a.min = a.min > v ? v : a.min);
        }

        return a;
    }

    @Override
    public Supplier<MaxMinContainer> supplier() {
        return MaxMinContainer::new;
    }

    @Override
    public BiConsumer<MaxMinContainer, Integer> accumulator() {
        return this::accumulate;
    }

    @Override
    public BinaryOperator<MaxMinContainer> combiner() {
        return this::combine;
    }

    @Override
    public Function<MaxMinContainer, MaxMinContainer> finisher() {
        return (a) -> a;
    }

    @Override
    public Set<Characteristics> characteristics() {
        return new HashSet<>(Arrays.asList(Characteristics.IDENTITY_FINISH, Characteristics.UNORDERED));
    }
}
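The MaxMinContainer used above is not shown in the original snippet; a minimal sketch, assuming the Optional-returning getters the collector and tests rely on, could look like this:

```java
import java.util.Optional;

public class MaxMinContainer {

    // package-private fields, mutated directly by the collector
    Integer max;
    Integer min;

    // an empty Optional signals that no value has been accumulated yet
    public Optional<Integer> getMax() {
        return Optional.ofNullable(max);
    }

    public Optional<Integer> getMin() {
        return Optional.ofNullable(min);
    }
}
```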

and its corresponding test:

package de.lacambra.utils.collectors;

import org.junit.Test;

import java.util.stream.IntStream;

import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;

public class MaxMinCollectorTest {

    @Test
    public void testMaxMinSerie() {

        MaxMinContainer result = IntStream.of(1, 2, 3, 4, 5, 6).boxed()
                .collect(new MaxMinCollector());

        assertThat(result.getMax().get(), is(6));
        assertThat(result.getMin().get(), is(1));

        result = IntStream.of(1000, 2, 3342, 421, 523, 6).boxed()
                .collect(new MaxMinCollector());

        assertThat(result.getMax().get(), is(3342));
        assertThat(result.getMin().get(), is(2));
    }

    @Test
    public void emptySerie(){
        MaxMinContainer result = IntStream.of().boxed()
                .collect(new MaxMinCollector());

        assertThat(result.getMax().isPresent(), is(false));
        assertThat(result.getMin().isPresent(), is(false));
    }

    @Test
    public void oneValueSerie(){
        MaxMinContainer result = IntStream.of(34).boxed()
                .collect(new MaxMinCollector());

        assertThat(result.getMax().get(), is(34));
        assertThat(result.getMin().get(), is(34));
    }
}
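As an aside, the same collector can be assembled without a dedicated class via the Collector.of factory. This sketch uses a two-element Integer array as the mutable container (nulls meaning "no value yet"), which is an assumption of this example rather than part of the original code:

```java
import java.util.stream.Collector;
import java.util.stream.Stream;

public class MaxMinOf {

    // Returns a collector whose container is Integer[2] holding {min, max}.
    public static Collector<Integer, Integer[], Integer[]> maxMin() {
        return Collector.of(
                // supplier: fresh container, both slots empty
                () -> new Integer[2],
                // accumulator: fold one value into the container
                (acc, v) -> {
                    if (acc[0] == null || v < acc[0]) acc[0] = v;
                    if (acc[1] == null || v > acc[1]) acc[1] = v;
                },
                // combiner: merge two partial containers
                (a, b) -> {
                    if (b[0] != null && (a[0] == null || b[0] < a[0])) a[0] = b[0];
                    if (b[1] != null && (a[1] == null || b[1] > a[1])) a[1] = b[1];
                    return a;
                },
                Collector.Characteristics.IDENTITY_FINISH,
                Collector.Characteristics.UNORDERED);
    }
}
```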

Create a correct 201 Created response with JAX-RS

@POST
@Path("project")
public Response createProject(@Context UriInfo uriInfo, JsonObject json) {
        Project project = projectConverter.fromJson(json);
        project.setWorkspace(getCurrentWorkspace());
        project = em.merge(project);

        URI uri = uriInfo.getBaseUriBuilder()
                .path(ProjectResource.class)
                .resolveTemplate(PathExpressions.workspaceId, getCurrentWorkspace().getId())
                .resolveTemplate(PathExpressions.projectId, project.getId())
                .build();

        return Response.created(uri).build();
}
The @Context UriInfo uriInfo parameter provides information about the current request URI.
The .path(ProjectResource.class) call appends the path declared on ProjectResource.class.
The .resolveTemplate(PathExpressions.workspaceId, getCurrentWorkspace().getId()) call replaces the workspaceId template variable with the actual workspace id.

Once the whole path has been created, all that remains is to wrap the URI in a created response.

Collect into a JSON-P JsonArray without using forEach

Each collector has three parts:

  • A supplier: provides new instances of the mutable result container.
  • An accumulator: folds each collected element into a container. Several container instances may be used in parallel.
  • A combiner: merges two containers, putting all collected objects together.

For the JsonArray, the supplier, accumulator and combiner are respectively:

JsonArrayBuilder createArrayBuilder()
JsonArrayBuilder add(JsonValue value)
JsonArrayBuilder add(JsonArrayBuilder builder)

    public JsonArray getArray(Jsonable[] objects) {
        return Stream.of(objects).map(Jsonable::toJson)
                .collect(
                        Json::createArrayBuilder,
                        JsonArrayBuilder::add,
                        JsonArrayBuilder::add
                ).build();

    }

    public static class Jsonable {

        public JsonObject toJson() {
            return Json.createObjectBuilder().add("someId", LocalTime.now().toString()).build();
        }
    }

Configure a local development environment on MacOS with Docker

Using dnsmasq for development domains

With dnsmasq I redirect my development domains to their corresponding targets:

  • *.dev: localhost
  • *.dock: docker container

So there is no more IP juggling in the browser URL.

Install dnsmasq on Mac OS X:

(from http://passingcuriosity.com/2013/dnsmasq-dev-osx/)

brew install dnsmasq
# Copy the default configuration file.
cp $(brew list dnsmasq | grep /dnsmasq.conf.example$) /usr/local/etc/dnsmasq.conf
# Copy the daemon configuration file into place.
sudo cp $(brew list dnsmasq | grep /homebrew.mxcl.dnsmasq.plist$) /Library/LaunchDaemons/
# Start Dnsmasq automatically.
sudo launchctl load /Library/LaunchDaemons/homebrew.mxcl.dnsmasq.plist

Configure dnsmasq:

(/usr/local/etc/dnsmasq.conf)

#Redirect *.dev urls to 127.0.0.1
address=/dev/127.0.0.1

#Redirect *.dock urls to docker host url (192.168.99.100 in my case)
address=/dock/192.168.99.100

and restart dnsmasq:

sudo launchctl stop homebrew.mxcl.dnsmasq
sudo launchctl start homebrew.mxcl.dnsmasq

Then you need to tell the OS to use dnsmasq for the desired domains. Most UNIX-like operating systems have a configuration file called /etc/resolv.conf which controls the way DNS queries are performed, including the default server to use for DNS queries (this is the setting that gets set automatically when you connect to a network or change your DNS servers in System Preferences).

OS X also allows you to configure additional resolvers by creating configuration files in the /etc/resolver/ directory. This directory probably won’t exist on your system, so your first step should be to create it:

sudo mkdir -p /etc/resolver

Now you should create a new file in this directory for each resolver you want to configure. There are a number of details you can configure for each resolver, but I generally only bother with two:

  • the name of the resolver (which corresponds to the domain name to be resolved); and
  • the DNS server to be used.

Create a new file with the same name as your new top-level domain (I’m using dev and dock) in the /etc/resolver/ directory and add a nameserver to it by running the following commands:

sudo tee /etc/resolver/dev >/dev/null <<EOF
nameserver 127.0.0.1
EOF

Binding all services

I am developing an application that needs several resources:

  • A Wildfly AS
  • A MySQL DB
  • A Keycloak server
  • An nginx server as proxy.

To avoid typing the ports each time, I use nginx to redirect requests to the correct container. All the containers must be able to locate the proxy server, and the proxy will do the rest. To achieve that, when we run a container we link it to the proxy server under the domain of the required service. E.g., to get my Wildfly app signing in against the Keycloak server, we run the containers as follows:

docker run --link proxy:keycloak.dock some/wildfly
docker run --link proxy:app.dock some/keycloak

Configuring the proxy:

In order to allow the proxy to bind all services, we need to add some server configs. For the Wildfly app, a file named app.dock is added to the conf.d directory:

server {
    listen      80;
    server_name app.dock;

    location / {
        proxy_pass http://${docker-host-ip}:${wildfly-app-port};
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # needed to map the ip address to the correct domain
        proxy_redirect http://${docker-host-ip}:${wildfly-app-port} http://app.dock;
        proxy_redirect ${docker-host-ip}:${keycloak-port} http://keycloak.dock;
    }

    access_log /var/log/nginx/app.dock_access.log;
    error_log  /var/log/nginx/app.dock_error.log;
}

and for the keycloak:

server {
    listen      80;
    server_name keycloak.dock;

    location / {
        proxy_pass http://${docker-host-ip}:${keycloak-port};
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # needed to map the ip address to the correct domain
        proxy_redirect ${docker-host-ip}:${keycloak-port} http://keycloak.dock;
    }

    access_log /var/log/nginx/keycloak.dock_access.log;
    error_log  /var/log/nginx/keycloak.dock_error.log;
}

Get java.util.logging working on UnitTesting

If you need to activate java.util.logging in your tests, you can do so by adding the VM option
-Djava.util.logging.config.file=/path/to/logging.properties

where logging.properties can be something like

handlers = java.util.logging.ConsoleHandler
.level=INFO
your.package.level = FINE
java.util.logging.ConsoleHandler.level = FINE

You can find a more complete example on https://svn.apache.org/repos/asf/river/jtsk/skunk/surrogate/testfiles/logging.properties
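A quick way to check that the configuration is being picked up is a logger that emits at both levels; assuming the properties above, the FINE line only appears for loggers under your.package (the class name LogCheck here is hypothetical):

```java
import java.util.logging.Logger;

public class LogCheck {

    public static void main(String[] args) {
        Logger log = Logger.getLogger("your.package.LogCheck");
        log.info("always visible at the default INFO level");
        // only printed when -Djava.util.logging.config.file points at the
        // properties above (logger and ConsoleHandler both at FINE)
        log.fine("visible only when FINE is enabled");
    }
}
```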


Run nano on an Ubuntu Docker image

I have tried to run nano on an Ubuntu docker image, and after installing it I always got this error:
Error opening terminal: unknown.

The solution is as easy as running export TERM=xterm inside the container.

The problem is that the setting does not survive a restart, as in any bash session. I will try to add it to the Dockerfile, but since the terminal seems to be set by the run command, I am not very optimistic about it.
