Note to self: whenever you try to animate a view that is either GONE or has height 0, the animation will not start the first time. Only when the user touches the screen or a UI update is triggered will the animation actually begin.
Of course I found this Stack Overflow question about animations that do not start, but there the view was defined as VISIBLE. I also explicitly made the view visible in code – just to be sure – but that was not enough. One response suggested setting the height to something larger than 0. I gave this some thought, but instead of manipulating the height I first looked elsewhere.
After some crazy hours investigating threads and creating many Handlers and Runnables, I figured I would give this height thing an extra try. And finally, it worked!
So, whenever you animate the height of a view starting from 0, give the view a height larger than 0 first (for example 1 pixel) so the animation can actually start.
… and you get an exception upon either writing, or parsing the string after it was only partially written. With several megabytes of data, simply writing to a String will not work.
You will need a JSON printer that can output to streams or handle writers. After searching I found Scala Stuff’s json-parser, which has support for several JSON AST libraries like Spray. With json-parser, writing large JSON objects is just a matter of:
val writer = new PrintWriter(new File(file))
new SprayJsonPrinter(writer, 0)(largeMapObject.toJson)
writer.close() // flush the buffered output
and reading is just as easy:
val reader = Source.fromFile(file).bufferedReader()
val jsValue = SprayJsonParser.parse(reader)
Although this basic usage of Spray and Scala Stuff’s json-parser allows you to parse and write large JSON objects, your machine’s memory is still a bottleneck, as both keep the resulting JSON AST in memory. When dealing with huge JSON streams I recommend taking a look at json-parser’s JsonHandler trait or Play Framework’s JSON Transformers.
Today I disabled the NewRelic JVM agent on one of my projects. While the Play Framework server was outputting a ZipOutputStream to a client, the NewRelic agent would for some reason gather massive amounts of data and cause the JVM to garbage-collect continuously until the app became unresponsive and finally crashed:
Uncaught error from thread [play-akka.actor.default-dispatcher-33] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[play]
java.lang.OutOfMemoryError: GC overhead limit exceeded
Uncaught error from thread [play-scheduler-1] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[play]
java.lang.OutOfMemoryError: GC overhead limit exceeded
[ERROR] [02/27/2015 14:45:10.002] [application-scheduler-1] [ActorSystem(application)] exception on LARS’ timer thread
java.lang.OutOfMemoryError: GC overhead limit exceeded
Since September I had been using NewRelic’s v3.10.0 agent. The specific zip streaming that caused the issue was a feature scheduled to go live on February 27, but of course the feature had been tested before that. Both locally and in production the feature seemed to work – for smaller amounts of files. In production, however, the typical zip would contain more than 1000 files, each ranging from several KB to 0.5 MB. As soon as we discovered the issues we started delving into what could have caused the symptoms: a server that would not handle any more requests, using 100% CPU and its maximum allowed memory (-Xmx1024m). We refactored the complete logic responsible for serving the zips, but to no avail. Locally the new method did seem better: the zipping no longer kept using resources after a request was closed prematurely. We also wrote a test that simulated zipping random files; this also worked – locally.
Locally, however, no NewRelic agent was installed. How could a service responsible for showing problems ever be the cause of the problems, we thought.
It now being past February 27, the zip streaming had been enabled for our users. We saw an immediate increase in downtime: the server would hang, we would – ironically – get an e-mail from NewRelic, and we would restart the server. Of course the logs directed us to the culprit: the initiation of a zip stream was always the last action before the downtime.
This weekend I decided to investigate, and learned some new tricks. Having never made a heap dump before, this felt quite tricky at first, but in the end it was very easy:
> ssh server "ps x | grep play"
> ssh server "sudo -u play jmap -dump:file=/dump.hprof <processid>"
> scp server:/dump.hprof .
> # Open the dump with Eclipse’s Memory Analyzer
Eclipse Memory Analyzer is a free tool that is very easy to use. It starts by importing your dump file (which is way faster in Run in Background mode!) and then shows really helpful statistics:
After seeing this analysis I was baffled: how could this be? So I searched and found this thread on the New Relic forum from October 2014. More people had this issue! In December they released version 3.12.1 which has the following release notes:
Play 2 async activity is no longer tracked when transaction is ignored.
Reduced GC overhead when monitoring Play 2 applications.
Reduced memory usage when inspecting slowest SQL statements
… they said. So I updated to the new version of the agent. It did not work: the server would still hang, caused by the many Transaction objects stored in a queue. New statistics:
Too bad the update did not fix the issue. I hope NewRelic can fix it in the future, as I really liked both the assurance of getting an e-mail when your server is down and the ability to drill down into performance issues. By the way: besides the issue still being there, the app’s performance also decreased with the updated agent:
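For completeness: “disabling the agent” from the first paragraph simply meant starting the JVM without it. A sketch of the two options – the /opt/newrelic/newrelic.jar path and app.jar are placeholders for your own setup; the agent_enabled switch is a documented NewRelic setting that can also be overridden with a system property:

```shell
# Option 1: start the JVM without the -javaagent flag at all:
java -jar app.jar

# Option 2: keep the flag, but switch the agent off for this run
# (overrides the agent_enabled setting from newrelic.yml):
java -javaagent:/opt/newrelic/newrelic.jar \
     -Dnewrelic.config.agent_enabled=false \
     -jar app.jar
```

Option 2 is handy when you want to rule the agent in or out without touching your deploy scripts.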
It has been quite some time since the last post about Open Directory on this blog, but I wanted to share something I found out about OS X Mavericks as an OD client. Let’s face it: running a non-Apple server for Apple clients is not your ideal one-click solution. It is not officially documented, and the blog posts out there are sparse and, more importantly, dated. Most how-tos date back to 2009 or even older. My own write-up dates from 2009, rewritten in 2011.
A lot has changed since then. The server we tried to mimic was Tiger or Leopard, and we hooked up clients of the same version. Apple continuously updates OS X, and our consumers (or family members) want the latest and greatest features. New versions like Lion and Mavericks brought some breaking changes. Think of technologies like AFP replacing NFS, the new SMB2 that is not yet fully compatible with Samba, but also the discontinued Workgroup Manager (at least it is no longer shipped) that I used to model changes from my old Snow Leopard server into my Linux OD’s autoconfig plist.
Some of my family members were experiencing a lot of trouble with their network home directories continuously disconnecting. I tried to overcome this issue by updating to the newest version of Netatalk (3.1), but this did not solve the problem. I also tried to disable Spotlight as described [here](link needed). No luck. Looking through the log files I found some “too many open files” messages, but searching the web I could not link them to network accounts with AFP homes on Linux servers.
Previously I did not want our machines to keep local copies of the users’ home folders, as I was worried about the home directories getting out of sync – behavior that can be very hard to explain to my users. When I tried mobile accounts myself, my own home folder got out of sync and it was a pain to merge everything back together.
This situation (with a continuously disconnecting home folder) was unworkable for the user, however, and I decided to give it one more try. I went to System Preferences > Users and Groups and clicked the button to create a Mobile Account for my user. OS X logged out and asked for the user’s password. After that: nothing. Inspecting the logs I found a warning about a missing GeneratedUID field in the user’s metadata. Looking at the LDAP schema I indeed found an apple-generateduid field that all of my users lacked. I added a binding for this field in my OD config and added the field to all my users. I then tried creating a Mobile Account again, and suddenly OS X started syncing the home folder. Success!
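For reference, the LDAP change boils down to adding the missing attribute to each user entry. A minimal sketch with ldapmodify – the user DN, admin DN and UUID below are placeholders, and your server must of course have Apple’s schema (which defines apple-generateduid) loaded:

```shell
# Write an LDIF that adds the missing apple-generateduid attribute
# to one user entry (DN and UUID are placeholders):
cat > add-generateduid.ldif <<'EOF'
dn: uid=alice,ou=People,dc=example,dc=com
changetype: modify
add: apple-generateduid
apple-generateduid: 6A3F9C2E-1111-2222-3333-444455556666
EOF

# Apply it against the directory (admin DN is a placeholder):
# ldapmodify -x -D "cn=admin,dc=example,dc=com" -W -f add-generateduid.ldif
```

Repeat (or script) this for every user that lacks the field; each user needs a unique UUID.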
Not all files got synced immediately – the user’s home folder is 10 GB+ – and after login the home folder was incomplete. Just make sure you sync your users’ home directories completely, or explain to them that their home still needs to be synced; otherwise they will be scared when they see an empty home directory.
Save the songs in the same .playlist as the currently playing audio-tag.
Save current seek time of the playing audio-tag as it changes.
Save the current song in the .playlist, i.e. its index.
Restore the playing playlist on page load in a .playlist-player element, if one exists.
This way the audio keeps playing while navigating through the website. Once the player works, I’m also going to use AJAX to load the next page and pushState and popState to update the URL, for a completely smooth audio experience.
So halfway through development the audio suddenly started crackling. Whenever I paused and played the audio again and again, the crackling got worse. To test whether this was an HTML5 thing I tried a YouTube video, but that audio crackled too. Googling for ‘Chrome audio crackling’ gave me some hints, but the solution I eventually discovered was nowhere near the solutions in the Google results.
While developing in Chrome I had also tried to view the player in an iPad Simulator (from Xcode). This was not working, so I decided to make things compatible later – and forgot I still had it open. I restarted Chrome and the crackling was still there; I cleared the cache; nothing worked. Then I remembered the Simulator, closed it, and the audio was back to normal! The problem is probably Core Audio being used by both Mobile Safari and Chrome, but I’m not sure.
You can test this by doing the following:
Start Chrome and Mobile Safari (in the iOS Simulator) on the same OS X device.
Load a website using HTML5 audio-tags and start playing them in both browsers.
Toggle the play/pause button a few times in Chrome; the audio should become bad.
Close the iOS Simulator and the audio instantly goes back to normal in Chrome.
It will not always occur – I only discovered it on the third try – but still: hope this helps anyone.
I previously wrote about running WhatsApp through VNC on your Mac, but that wasn’t useful if you didn’t have a phone nearby. Meanwhile, there has been some development in the reverse engineering of WhatsApp and there are also some great emulators available right now, so I would like to introduce to you: Bluestacks running WhatsApp!
Warning: you can only have one device connected to WhatsApp at a time. Once you have entered the verification code on your computer, you can’t use WhatsApp on your phone anymore until you reactivate WhatsApp on your phone, which can only be done 60 minutes after the previous verification code was sent. A great way to circumvent this is by using a landline number, since you won’t be installing WhatsApp on your good old DECT phone.
Start WhatsApp and set it up by entering your country and phone number, then wait until the countdown is up. You will receive an SMS in the meantime, but you can’t enter this code since – on Android – WhatsApp handles the SMS automatically. After the time is up, WhatsApp will offer to call you. Pick up your phone and enter the code the voice reads out on your computer.
Ever experienced high process numbers (20k+) in Finder? That’s probably because some process keeps respawning over and over: some LaunchAgents fail to start and launchd keeps trying to restart them. This fills your hard disk with logs and keeps it busy writing, preventing it from spinning down.
12-07-12 11:11:25,732 com.apple.launchd: (org.postgresql.postgres) Exited with code: 1
12-07-12 11:11:25,732 com.apple.launchd: (org.postgresql.postgres) Throttling respawn: Will start in 10 seconds
12-07-12 11:11:34,034 com.apple.launchd: (com.edb.launchd.postgresql-9.1) getpwnam("postgres") failed
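To see which agents respawn the most, you can aggregate such lines per label. A small sketch that uses the excerpt above as sample input – in practice you would point the same pipeline at /var/log/system.log:

```shell
# Sample input, mirroring the log excerpt above:
cat > sample-system.log <<'EOF'
12-07-12 11:11:25,732 com.apple.launchd: (org.postgresql.postgres) Exited with code: 1
12-07-12 11:11:25,732 com.apple.launchd: (org.postgresql.postgres) Throttling respawn: Will start in 10 seconds
12-07-12 11:11:34,034 com.apple.launchd: (com.edb.launchd.postgresql-9.1) getpwnam("postgres") failed
EOF

# Count "Throttling respawn" messages per agent label:
grep 'Throttling respawn' sample-system.log \
  | sed 's/.*(\(.*\)).*/\1/' \
  | sort | uniq -c | sort -rn
```

The label(s) at the top of the output are your respawn offenders.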
You should probably fix the error that prevents the agent or daemon from starting, but that depends on the kind of agent. In my case I did not need the agents that kept spawning: I needed neither a PostgreSQL server nor the wiki server OS X provides, which comes with all kinds of collab* processes. Please proceed only if you are sure you don’t need the agent you are going to remove, permanently!
So, how do you stop these annoying little agents? First determine the process name, then look up the corresponding job in launchctl:
$ sudo launchctl list | grep annoyingAgent
You can remove this job from launchctl by running (note: remove takes the job label, not the plist filename):
$ sudo launchctl remove 'com.your.annoying.agent'
Please check now that your computer is still functioning. Do this first, as you can still return easily at this point: just run 'launchctl load -w' with the path of the agent’s plist file to re-add it.
When you reboot your machine the processes sometimes come back; to permanently disable them, run:
$ locate 'com.your.annoying.agent.plist' | while read -r line; do sudo mv "$line" "$line.disabled"; done
When you have this (or your own working LDAP server) up and running, your users can log in on Mac OS X and Ubuntu and use their home directories and such. But they can’t change their credentials and user info – until now.
When I first made my installer for OpenLDAP, I did it for my own convenience. But while doing so I realized the script could be useful to more people, and I published it on my blog. Then it grew, and I invited Sean to write on my blog about this project. For Sean and you guys I just made a Git repository on GitHub, so anyone can edit the installer to become the best (and first) SSO installer ever.