Content from 2020-04
Today I've had to dig deeper into a problem authenticating against an HTTPS API. This client was sending Basic Authentication information when following a 3XX redirect, which then made the second server (well, S3 really) return a 400 Bad Request, since it refuses to deal with more than one authentication method at the same time.
This is all well and good, but debugging what was actually being sent is a little more difficult if curl is not the method of choice. Instead I found the -Djavax.net.debug=all option for the JVM. This makes it dump a lot of information throughout a connection. Mostly that's already enough to debug the issue, since a hexdump of the HTTP traffic is included. On the other hand it's also pretty verbose.
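To enable it, the flag just needs to end up on the JVM command line; for an application started through SBT, the same JAVA_OPTS pattern as for the Java agent below should work:
env JAVA_OPTS="-Djavax.net.debug=all" sbt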
Another option is the slightly more involved jSSLKeyLog, which requires the use of a JVM parameter to include the Java agent, e.g. for SBT like so:
env JAVA_OPTS="-javaagent:jSSLKeyLog.jar==/jsslkeylog.log" sbt
Two more notes here: compiling the tool is really easy; once cloned, mvn package results in a ready-to-use JAR file. Also, the log contains more information when two equal signs are used (handy for manual inspection). This file can then be fed directly into Wireshark ("Edit", "Preferences", "Protocols", "TLS", "(Pre)-Master-Secret log filename") and will then allow the decoding of the captured network traffic (e.g. captured via tcpdump -i any -s 0 -w dump.pcap).
Docker is ubiquitous in many projects and therefore it may be useful to dig into its inner workings in more detail. Arguably those aren't too complicated: a smallish program that does the essentials can be built in a few hours.
The CodeCrafters challenges focus on exactly this kind of idea, taking an existing tool and rebuilding it from scratch. Since they're currently in Early Access, I've only had the opportunity to try out the Docker and Redis challenges so far, but I thought maybe a few insights from them would be good to share.
Part of the challenge is to run the entrypoint of a container; using Go it's actually fairly easy to run external programs. Using the os/exec package is straightforward, and even redirecting I/O is easy enough by looking at the Cmd structure a bit closer and assigning values to the Stdin, Stdout and Stderr fields. The exit status can also easily be obtained from the error return value by checking for ExitError (only if it was not successful, that is, non-zero):
if err = cmd.Run(); err != nil {
    if exitError, ok := err.(*exec.ExitError); ok {
        ...
    }
}
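Put together, a minimal sketch of running a command with redirected I/O and a propagated exit status could look roughly like this (the command and arguments are just placeholders, not the actual entrypoint handling from the challenge):
package main

import (
    "os"
    "os/exec"
)

func main() {
    // Placeholder command; in the challenge this would be the
    // container's entrypoint and its arguments.
    cmd := exec.Command("echo", "hello")

    // Wire the child's I/O straight to our own streams.
    cmd.Stdin = os.Stdin
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr

    if err := cmd.Run(); err != nil {
        if exitError, ok := err.(*exec.ExitError); ok {
            // Propagate the child's non-zero exit status.
            os.Exit(exitError.ExitCode())
        }
        panic(err)
    }
}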
Interestingly enough, the SysProcAttr field exposes some functionality that is a bit more difficult to use in, say, C. While using the syscall package is possible, it's mostly easier to assign a few values in that field instead, using the definition of the SysProcAttr structure itself.
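As a rough sketch of what that can look like (Linux-only, needs sufficient privileges, and the exact flags are just an assumption about what a namespacing step might use):
package main

import (
    "os/exec"
    "syscall"
)

func main() {
    cmd := exec.Command("echo", "hello")
    // Linux-only: request fresh PID and UTS namespaces for the child
    // process, without having to call clone(2) manually.
    cmd.SysProcAttr = &syscall.SysProcAttr{
        Cloneflags: syscall.CLONE_NEWPID | syscall.CLONE_NEWUTS,
    }
    if err := cmd.Run(); err != nil {
        panic(err)
    }
}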
Later on there's also the need to parse some JSON - that's again easily done with the standard library, using encoding/json, in particular Unmarshal to a map[string]interface{} (in case we just want to grab a top-level entry in a JSON object), or to a pointer of a custom type using structure tags like so:
type Foo struct {
    Bars []Bar `json:"bars"`
}

type Bar struct {
    Baz string `json:"baz"`
}

...

foo := Foo{}
if err := json.Unmarshal(body, &foo); err != nil {
    panic(err)
}
for _, bar := range foo.Bars {
    println(bar.Baz)
}
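The map[string]interface{} variant mentioned above is similar, except that the values need a type assertion before use; something along these lines, assuming the same body and encoding/json import as above ("token" is just a made-up key for illustration):
var parsed map[string]interface{}
if err := json.Unmarshal(body, &parsed); err != nil {
    panic(err)
}
// Values come back as interface{}, so a type assertion is needed.
if token, ok := parsed["token"].(string); ok {
    println(token)
}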
The Redis challenge is comparatively more contained to just using standard library tools; the most interesting thing I've noticed was that there's now a concurrency-friendly map implementation called sync.Map, so no external synchronization primitive is needed.
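A minimal sketch of the key-value part on top of that (Store and Load are the standard library API; everything else here is just an assumed shape):
package main

import "sync"

func main() {
    var store sync.Map

    // SET: safe to call from multiple connection-handling goroutines.
    store.Store("mykey", "myvalue")

    // GET: returns (value, ok), similar to a regular map lookup,
    // with a type assertion since values are stored as interface{}.
    if value, ok := store.Load("mykey"); ok {
        println(value.(string))
    }
}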
What else helped is the redis-cli tool, though I had to find out for myself that it doesn't interpret the specification very strictly - in fact, just about everything in the server's response will be printed, even when not valid according to the spec.
Overall the biggest challenge here might be to accurately parse the command input and deal with expiration. I simply chose a lazy approach there instead of, say, clearing out the map on a timer - this will of course not be memory-friendly long-term, but for implementing a very simple Redis server it's more than enough to pass all tests.
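For reference, the lazy variant could be sketched roughly like this, assuming each value is stored together with an optional expiry time and only checked when it's read (the entry type and get function are made-up names; "sync" and "time" imports are assumed):
// entry is a made-up value type: the payload plus an optional expiry.
type entry struct {
    value     string
    expiresAt time.Time // zero value means "never expires"
}

func get(store *sync.Map, key string) (string, bool) {
    v, ok := store.Load(key)
    if !ok {
        return "", false
    }
    e := v.(entry)
    // Lazy expiration: a stale key is only noticed (and removed) on read.
    if !e.expiresAt.IsZero() && time.Now().After(e.expiresAt) {
        store.Delete(key)
        return "", false
    }
    return e.value, true
}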
After working with Scala for a while now, I thought it would be good to write down a couple of notes on my current testing setup, in particular with regards to which libraries I've settled on and which style of testing I've ended up using.
Tests end up in the same package as the code that's tested. A group of tests is always in a class with the Tests suffix, e.g. FooTests. If it's about a particular class Foo, the same applies.
scalatest is used as the testing framework, with AnyWordSpec, which means we're using the should/in pattern. For mocking the only addition is MockitoSugar to make things more Scala-ish.
What does it look like?
package com.example.foo

import org.mockito.MockitoSugar
import org.scalatest.wordspec.AnyWordSpec

class FooTests extends AnyWordSpec with MockitoSugar {
  "Foo" should {
    "do something" in {
      val bar = mock[Bar]
      val foo = new Foo(bar)
      foo.baz(42L)
      verify(bar).qux(42L)
    }
  }
}
Easy enough. There's also some more syntactic sugar for other Mockito features, meaning ArgumentMatchersSugar should also be imported when needed. Similarly, scalatest has a number of additional helpers for particular types like Option or Either, e.g. OptionValues and EitherValues.
class BarTests extends AnyWordSpec with Matchers with EitherValues with OptionValues {
  "Bar" should {
    "do something else" in {
      val bar = new Bar
      bar.qux(42L).left.value should be(empty)
      bar.quux().value shouldBe "a value"
    }
  }
}
This can be taken to the extreme, but usually it seems easier to me to simply assign highly nested values to a variable and continue with matchers on that variable instead.
Since sbt is often used, the two test dependencies would look like this:
libraryDependencies ++= Seq(
  "org.scalatest" %% "scalatest" % "3.1.1" % Test,
  "org.mockito" %% "mockito-scala-scalatest" % "1.13.0" % Test,
)