I had intended to fly from my home to Dublin, before hopping over to Manchester, and then finally on to Sofia before coming back home again.
I was stoked to attend my first ever DrupalCon, to be going back home for the excellent PHPNW, and ending my trip with my favorite conference of 2015, BgPHP. It was going to be amazing.
And I couldn’t do it.
I did a lot of travel last year, and it was amazing. And tiring. I decided this year to do less travel, and I think that objectively, I have. However, this year is the first time my son has also cried when I’ve left, and my travel has been bunched up such that I’ve been away on several long trips.
I had just completed an unexpected West-East-West-East-West (SEA (Home) -> BUF (CodeDaze) -> SEA (PNWPHP) -> JFK (Velocity) -> SEA (Home)) and it was exhausting.
So, as I got up, said goodbye to my family, and loaded my stuff in my cab to the airport, I was anxious about being away for yet another extended period of time.
We started our drive to the airport, and I used the time to share my feelings with some friends via Twitter… and it became clear to me that I was either experiencing burnout, or close to it.
Despite my embarrassment, I had my driver turn around and take me home. I cancelled my flights and my hotels, and told my boss. I contacted DrupalCon and went to bed. After I got up again, I contacted PHPNW, and yet… I still hoped to make BgPHP. I thought maybe I could get my shit together in 5 days or so and still make the event I’d been looking forward to all year. But no… I couldn’t. And so, a few days later, I told them too.
For those wondering, my emails were something like this:
Hi Ramelina (and the rest of the BgPHP crew),
As you know BgPHP is my absolute favorite conference in the entire world… however, I won’t be able to make it this year :(
After scaling back my travel due to mental health reasons I’m no longer able to attend :(
I understand that this is less than ideal for you and the wonderful attendees, and I’d like to do anything possible to make it easier for you.
Please let me know if I can be of any help, and if there any costs (such as hotel) that I need to cover.
I’m really sorry :(
And you know what? Everyone was fantastic. Nobody got mad, nobody was upset; they all just wished me well and, I’m sure, scrambled to fix the mess I had made. And I am eternally grateful.
And dreadfully sorry.
I am sorry to the organizers, of course, but also to the attendees and the wider community. As a Developer Advocate, I want conference organizers and the community to be able to rely on me as a known, stable quantity. This is why, for example, I always let organizers know I have other talks I can give if another speaker drops out for any reason. This is why I try to always arrive early, and why I work so hard on my slides.
I’m currently writing this from Berlin, and in truth, there was still a bit of anxiety leaving for this trip (with Akamai Edge in San Francisco first) — but, I’m glad I did it, if only to prove to myself that I still can.
Thank you to everyone who reached out to me when they heard I wasn’t going to make it, and to those who did so in person this week; it meant a lot to me.
I will try to do better.
Update: 28 Oct, 2016
The Lead DrupalCon Coordinator, Amanda Gonser, sent me this (published with permission):
Just saw your blog and wanted to say that I am happy you took care of yourself. I am glad to hear you’re doing better but still getting out there! We look forward to having you at another DrupalCon in the future if you’re up for it.
Thank you so much Amanda, I still hope to attend my first DrupalCon soon!
This post is over 2 years old and is probably out of date.
I have been speaking about HTTP/2 for almost a year now, and throughout that time I have stated over and over again:
As a community, we are still trying to figure out best practices. The examples I’m showing here, are ideas. Nobody has figured this out yet. We [application developers], web server vendors, and browser vendors, all have a role to play in exploring HTTP/2 and informing each other about what we find.
I want to make clear that while this post talks about PHP specifically, the issues are present in most server-side languages that use FastCGI or similar models.
When I first started to explore HTTP/2 Server Push, my first thought was that PHP would not be able to do server push. PHP has a single output buffer, and its interface (the Server API, or SAPI) with the web server is built around that — whatever is output to the buffer, is then served to the end-user.
Exploring further, it became clear that those discussing and trying to figure out how to deploy the HTTP/2 spec have solved this issue by using Link headers.
This informs the web server, which is then responsible for making the sub-requests and pushing the results out to the user. By treating each push as if it were a unique incoming request, we avoid the issue of PHP having a single output buffer; PHP is none the wiser.
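In practice, the application simply emits a preload Link header in its response, and a push-aware server (h2o, for example, or nginx with http2_push_preload enabled) takes care of the rest. A sketch, with a hypothetical asset path:

```http
Link: </assets/app.css>; rel=preload; as=style
```

From PHP, this is nothing more than `header('Link: </assets/app.css>; rel=preload; as=style');` before any output is sent.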
Why We Need a New SAPI
Most of the existing exploration of HTTP/2 focuses on websites (or web applications) rather than APIs. These two kinds of application can vary a lot in their performance needs.
An API, however, is typically composed of discrete resources, each with references to one or more other discrete resources.
While they may seem similar on the surface, the difference is that an API is surfacing a data-structure, while a webpage is a single flat document.
The API’s data structure is often backed by a datastore that supports efficient fetching of related resources.
Take the example I use in my talk: a blog API. A blog post might comprise:
The blog post itself
The author information
The comments, with each comment author’s information
We can imagine a couple of SQL queries like this:
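Something along these lines, against a hypothetical schema (table and column names are illustrative only):

```sql
-- Fetch the post along with its author
SELECT p.*, u.name AS author_name
FROM posts p
JOIN users u ON u.id = p.author_id
WHERE p.id = :post_id;

-- Fetch the comments, each with its author
SELECT c.*, u.name AS comment_author_name
FROM comments c
JOIN users u ON u.id = c.author_id
WHERE c.post_id = :post_id;
```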
Which, in a perfect RESTful world, would result in a number of separate resources, each with its own URL (and therefore a separate request):
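For instance (the URL scheme here is purely illustrative):

```
/posts/{id}              the blog post itself
/users/{id}              the post author
/posts/{id}/comments     the comments collection
/users/{id}              each comment's author
```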
In the current world of HTTP/1.1, we would likely, at best, split this into two resources (which happen to match up with our SQL queries above), with the sub-resources embedded like so:
post resource with author resource embedded
└── comments collection with each comment and author resource embedded
More often than not, we’ll just flatten the entire structure to a single post resource.
This is a trade-off we have to make in the name of performance.
If we wanted to move towards the first model, we would end up having to run many small queries at each layer of the structure, which could be very inefficient, or duplicate effort, especially if we need some of the sub-resource data to generate the resource URLs (think: pretty URLs using the author’s name for author resources).
So, what do we do? We can cache the intermediate information for later retrieval by the web server sub-requests, or we can write a SAPI that supports responding with the requested resource followed by subsequent pushes.
This however needs web server support.
Currently all SAPIs are based on the original CGI single request/response model. We need to move beyond this.
We need a new web server interface that supports multiplexing from the application layer, and we need PHP to be able to multiplex its output.
Additionally, we are going to want to control other features available in HTTP/2 dynamically for those multiplexed streams, such as stream weights and dependencies.
That Sounds Hard!
To do this would require a large effort on the part of many projects, on the scale of creating the original CGI spec: bringing together web server vendors and language authors to decide upon a standard way to handle multiplexed communication.
We also don’t know how effective having these abilities would be.
Browser vendors are still figuring out the best practices for handling what is now a much more complicated priority tree, and re-building rendering around it.
Because it’s difficult to do, there are few sites taking advantage of these features yet, so browser vendors have little more than an educated guess to go on.
New Application Architectures
Additionally, we’re going to have to explore new application architectures that feature asynchronous and parallel processing to create and output these multiplexed streams.
Introducing The HyPHPer Project
For the last few months I’ve been looking at the Python Hyper project, a series of libraries for handling HTTP/2. These libraries are for building concrete HTTP/2 clients and servers upon, and do not have any I/O — making them framework independent.
I have decided to try and port Hyper to PHP, as HyPHPer.
The goal is to provide a base for writing both servers and clients to explore both writing applications that can handle multiplexed responses, and documenting current browser behavior and performance implications of different response profiles.
We can then attempt to determine current best practices for performant web applications.
During the PyCon AU sprint days I managed to port the Hyper HTTP/2 frame implementation (hyperframe) entirely to PHP — including tests.
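To give a taste of what a frame library deals with, here is a minimal Python sketch (Python, since Hyper itself is Python) of packing and unpacking the fixed 9-octet HTTP/2 frame header defined in RFC 7540, section 4.1. This is the wire format hyperframe implements, not its actual API:

```python
import struct

def pack_frame_header(length, frame_type, flags, stream_id):
    """Pack the fixed 9-octet HTTP/2 frame header (RFC 7540, section 4.1)."""
    # 24-bit payload length, 8-bit type, 8-bit flags,
    # then a reserved bit plus a 31-bit stream identifier
    return struct.pack(
        ">HBBBL",
        (length >> 8) & 0xFFFF,  # high 16 bits of the 24-bit length
        length & 0xFF,           # low 8 bits of the length
        frame_type,
        flags,
        stream_id & 0x7FFFFFFF,  # reserved bit is always left unset
    )

def unpack_frame_header(header):
    """Unpack a 9-octet frame header into (length, type, flags, stream_id)."""
    hi, lo, frame_type, flags, stream_id = struct.unpack(">HBBBL", header)
    return ((hi << 8) | lo, frame_type, flags, stream_id & 0x7FFFFFFF)

# A DATA frame (type 0x0) with the END_STREAM flag (0x1) on stream 1,
# carrying a 5-octet payload:
header = pack_frame_header(5, 0x0, 0x1, 1)
assert len(header) == 9
assert unpack_frame_header(header) == (5, 0x0, 0x1, 1)
```

Everything above the frame layer (stream state, priorities, HPACK) builds on this simple fixed header, which is what makes a pure, I/O-free library a practical starting point for a port.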
At almost every event I’ve spoken at in the last twelve months, somebody has asked me how I created my slides. Typically they are talking about my code samples, but I wanted to cover all aspects of creating my slides.
You Are The Star, Not Your Slides
It’s often said that you, as the speaker, should be the star of the show. Your slides should be inconsequential, a supporting act. If the projector dies, your audience shouldn’t miss out on much.
While I strongly agree with the sentiment — after all, you’re paying to see the speakers — I disagree that your slides don’t matter. Your slides can be a valuable part of the experience for the audience.
I try to address as many learning styles as possible with my talks: I use photographic title slides to give visual learners a keyframe to hang on to, as well as bright colors and diagrams. I speak aloud for aural learners, and often my talks are recorded. Physical learners can take their own notes, and I encourage questions for the social learners.
There are two different presentation techniques I see commonly used.
As some of you may know, I have a hearing problem. Before I got hearing aids, going to a talk was hit-or-miss. If the speaker or the PA was too quiet, all I had to go on were the slides.
Due to my hearing loss, I always (even now, with hearing aids) watch TV with subtitles/closed captions on — I find that if you put too much content on your slides, they start to become bad subtitles for your speech.
I get distracted by differences, and I focus more on the words on the slide than on what you’re saying. Combatting this scenario is why the advice to de-emphasize your slides is so often given.
To counter this, some speakers have started doing slides with just images/gifs, sometimes adding a word or a title. This can be entertaining in person, but I get frustrated when I hear about a great talk and the slides turn out to be 25 pictures of cats with random words: there is no take-home value, and nobody is going to retain everything you say.
Splitting The Difference
I try to blend these two techniques by having a title slide that I talk over, and behind it a slide of bullets with more details, which I usually skip past during the talk. This slide serves three purposes:
Speaker notes: I set Keynote to show only the upcoming slide in the speaker view, and I use the bullets as speaker notes
Clarification: I can switch to that slide to clarify something, e.g. a class name
Take home: when I upload the slides, I leave those in, they summarize everything I’ve said
The most important thing for a slide is legibility, so I spent a lot of time picking fonts. I have gone with the “Source Pro” family from Adobe, comprising Source Sans Pro and Source Code Pro. The most recent addition to the family is Source Serif Pro, but I try to limit my slides to two fonts.
One of the benefits of these particular fonts is they come in a bunch of weights. This gives the ability to emphasize (semi-bold, bold, black weights) or de-emphasize (extra light, light) portions of my text.
These fonts also hold up well at both small and large sizes, and in print. I use Source Code Pro every day in my IDE, editors, and terminals, and have blown it up to fill a wall on a projector.
In fact, I use these same two fonts on this site, Source Sans Pro for titles and body text, and Source Code Pro for code (I override the default gist stylesheet for this!).
I try to make sure text is no smaller than 60pt, but will go down to about 52pt if necessary.
The most important thing about colors is contrast. I have found that on-the-whole, bright colors on a dark background stand the most chance of being visible on projectors, but I did have to invert my slides for one conference this year.
I ended up settling on the monokai theme, which is available for most every highlighting tool, IDE, editor, and terminal out there. It consists of five colors on a grey background, to which I added the Akamai blue and orange (branding++), and white for titles/text.
I use these eight colors for all code, graphs, and diagrams in my slides:
As mentioned, I use images on my title slides. I generally go for images that are visually striking, and are in some way related to whatever topic the slide is about. For example, for HTTP/2 Server Push, I used a picture of a push mower (and in fact, replicated that in this post on server push).
I find most of the images I use by using the Flickr advanced search. I set license to “Commercial use allowed” (I find conference talks a grey area and decide to err on the side of caution), ensure safe search is on, and change the sorting to “Interestingness”. Sometimes I will also open the Advanced panel and choose only portrait orientations and large images.
If this doesn’t yield usable results, I will fall back to Google’s image search. Here, you can also set your license under “Search Tools > Usage Rights” where I choose “Labeled for reuse”, I set “Type” to “Photo”, and “Size” to “Large”.
I find this typically will yield a visually striking image that I am free to use so long as I follow the rules laid out by the (typically Creative Commons) license.
I will then place the image on my slide and adjust its transparency until I feel there is enough contrast between the image and the title text; thanks to the dark grey background behind it, lowering the opacity darkens the image.
Lastly, the code. I do try to limit the amount of code I put in my talks; I don’t think slides are the best medium to learn code from. This does mean that I try to make the code I do include as easy to consume as possible.
As I mentioned, I use the monokai theme (or monokai-light for the inverted deck), and I use the Python Pygments library to perform the actual highlighting. Pygments can emit RTF, so I have a function in my bash setup that will take the contents of my clipboard and replace them with a syntax-highlighted version of the same text:
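The function looks roughly like this (a reconstructed sketch; it assumes macOS’s pbpaste/pbcopy and Pygments’ pygmentize on the PATH):

```shell
key() {
    # Language to highlight; defaults to PHP
    local lang="${1:-php}"
    local opts="style=monokai,fontface=Source Code Pro,fontsize=120"

    # PHP snippets without an opening <?php tag need the
    # lexer's startinline option to highlight correctly
    if [ "$lang" = "php" ]; then
        opts="$opts,startinline"
    fi

    # Highlight the clipboard contents as RTF and put them back
    pbpaste | pygmentize -l "$lang" -f rtf -O "$opts" | pbcopy
}
```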
This function takes its first argument as the language to highlight (defaulting to PHP) and sets the “startinline” option if necessary. It then pipes the output of pbpaste into pygmentize, passing the -f rtf flag to set the format, and setting the theme (style), font (fontface), and font size (fontsize) options with the -O flag: -O style=monokai,fontface=Source Code Pro,fontsize=120. The fontsize is double whatever the point size would be; as I mentioned earlier, I try to use 60pt, so it’s set to 120.
Lastly, it pipes the output to pbcopy allowing me to just Cmd+V and paste it back into Keynote.
I can call it as just key for PHP code, or say key json for JSON instead.
I also use animation to highlight code as I step through it — this is what I think makes most people ask what tool I use.
You can see an example here:
The purpose of this is to ensure that the audience can follow along as I explain the code. I will also show larger blocks of code and zoom in and pan around, I believe this helps maintain context in larger code examples.
I do this entirely in Keynote. The effect is created primarily with three shapes: the first two are black rectangles set to 65% opacity, obscuring the upper and lower parts of the code, while the third is a transparent rectangle with a white border, allowing me to “ring” the code I’m highlighting.
I used to “hand” animate each stage of the animation, painstakingly adding animation “Actions” to move and resize the three independent elements, but I have now come up with a better technique: Magic Move.
Magic Move is a slide transition that will cause Keynote to automatically create an animation between matching elements on two different slides. So now, rather than have one slide with 30+ different animations, I duplicate the slide, move the elements where I want them to end up for that step, and turn on Magic Move for the previous slide:
This drastically cuts down on the time it takes to do the animations and the animations end up being smoother as a result. I no longer dread wanting to change the code on any given slide.
My typical mantra when it comes to live demos is: don’t.
But, there’s something entertaining about a live demo, and I’ve never met an audience who wasn’t forgiving about issues. My hard and fast rule is to never do a demo that requires internet connectivity.
When giving live demos I always make sure I have a screencast of the same demo, as well as screenshots — worst case scenario I can flip through slides and give the play-by-play.
More recently I’ve been building my demos inside docker containers; this gives me a simple and easy way to manage them, and ensure I’m getting repeatable results every time.
Time & Effort
I put 120+ hours into every slide deck I create, and I continuously tweak them before every presentation based on the previous rendition. I strive to make them as entertaining and educational as possible for the live audience, while also being informative for the passive audience.
Not Without My Slides
While I can give my talks without my slides (everything I say in them has at some point been part of a conversation at least once, I’m sure), I truly believe that they dramatically enhance my ability to teach. At the end of the day, the goal is to teach something, and as we learned above, different people learn in different ways.
So, yes, I can give the talk without my slides, but I’m potentially leaving people behind if I do.
In my last post on Building a Community Presence I went over some of the approaches for engaging communities. This time I want to talk about what the actual job of a community builder is.
The role of the Community Builder is to sell you on the idea of the company and the product, not necessarily to sell you the product. Often, the things we’re selling are not bought without some decision-making process behind them, so a sale is not going to happen then and there anyway. Provide the education, build the trust, and the sale will happen.
Why I Hate Evangelists
I personally have a visceral reaction to the title “Evangelist”, which is unfortunate because it’s on my business cards.
The issue I have with the title is that when I think of Evangelist, I think of a fanatic who will use whatever methods they can to convert you to their way of thinking. And yes, there are Developer Evangelists who will do this. They are bad community members in my opinion, and I don’t trust their opinions.
The truth of the matter is that there is no single solution to any technical problem, and anyone who tells you differently is lying.
I prefer the “Advocate” title, or better yet, the one I chose for myself at Engine Yard, “Community Engineer”.
Advocating For The Company/Product
Being an Advocate is important, advocating for something in my mind means to put its best foot forward. This meant that while at Engine Yard, I probably sent more potential customers to our competitors, because I knew that had they tried our service they would’ve been disappointed for their current needs. Engine Yard is an amazing platform, but it’s complex (due to its customizability), it often requires some changes to “cloudify” your apps, and it can be comparatively expensive if you’re at the small end of the scale.
If you fall into the segment of the audience that doesn’t need the customization and support, then you’ll find the product over-complicated, and probably, over-priced. That is not a positive experience, go use Heroku instead (for now). If you fall into the other segment, then you’ll find Heroku to be constraining and expensive instead, go use Engine Yard and you’ll find it to be the right product for you.
I believe that honesty plays a big role in advocacy. If people are to trust what you say, you must be able to not only point out the flaws in your competitors, but your own as well. If you fail to do so, and you knowingly “sell” a customer a product that isn’t a good fit, they will have a negative experience and that’s what they will broadcast to their peers — that, to me, is an anti-sale. You stand to lose far more business that way.
If you instead sell them on the right product, even if it’s not your own, while at the same time educating on both products, in the future they will seek you out, and self-select into being a customer. At the very least, they will seek your advice again.
This still falls under the megaphone part of the role. And is probably the less valuable part of the role to your employer. They, realistically, can have similar results using traditional, informed marketing channels. And those flights are expensive.
Advocating For The [Potential] Customer
The most important part of the role is advocating for the customer, or potential customer. This is why I like the title “Advocate”, it cuts both ways. You are an advocate both to, and for, your customers and potential customers.
As an advocate, you are the person (or one of the people) closest to your customers. You attend the same events, you go out to eat and drink together, you attend the after parties together, and you commiserate over the early mornings after. You also have conversations about life, news, the weather, and work.
You hear their gripes and you are the conduit for upcoming news that affects them, negatively or positively.
I firmly believe that the most vital part of the role is creating a feedback loop, directly from your customers to your product team. Your job is to aggregate requests (“I’ve been to 14 events so far this year, and every time someone has asked about X”), spot trends (“Lots of people are starting to talk about Y, here’s what it might mean for us”), and raise concerns (“People find this part of our product confusing because Z”).
If your community team does not have a direct line back to your product team you are missing out on the most valuable information they are collecting.
Advocating For The Community
As mentioned in the previous post, as part of your role building relationships, and as the person most entrenched in the community, you should be working directly with marketing on messaging and interactions.
You should have the power to veto any messaging your company puts out to your community; after all, your knowledge of the community is one of the reasons they hired you. It’s in the company’s best interests to have you invested in their messaging: you are the one on the ground who has to deliver it, after all.
Advocating For Yourself
At the end of the day, being a community builder is a job. I personally won’t take on the role unless I believe in the product, because I want to enjoy my job, I want to help people do their jobs better, and I can’t do either of those things with something I don’t use, or don’t care for.
This is also part of the reason I didn’t look for new roles in the same space when switching jobs; it feels disingenuous to the community to “sell” them on one product one week, and then on a competitor the next. My paycheck doesn’t buy my loyalty; the company’s values and a product I believe in do.
I hope this is obvious when I interact with people, and I believe it’s a necessity to be good at this job. We must maintain our intellectual honesty and community integrity. Your reputation is your true value, for your company, for your community, and most importantly, for yourself.
Your company has decided it needs to “do community”, whatever that means, and you’re community manager number one. What now? From my time as a developer advocate/evangelist under both marketing and engineering teams, I have come to some conclusions about how to build a community presence. Though my experience is mostly with technical communities, this should apply pretty well to any community building.
Identify Your Product’s Potential Audience(s)
You’ve probably already done this to some degree; after all, to build a successful product you need to know, and understand, who you are building it for. But if your target audience is “developers”, that’s only marginally better than saying your audience is “humans”.
Chances are your product is going to fall into one of two categories: a tool built for use with a specific language (e.g. the excellent greenkeeper.io for keeping your node.js npm dependencies up-to-date), or one built for most every language (e.g. Travis CI for continuous integration).
If you fall into the former category, you’ll have a much easier time narrowing down the identity of your audience: your product itself defines it. For the latter, you need to recognize that you have multiple (albeit similar) audiences; working with, say, the PHP community will be different from working with the Ruby or Java communities.
There are, in my mind, three approaches to building your community team:
Breadth First Evangelism
If you have multiple communities you wish to target, you may decide that the most important thing to do is to get your name and your product out there in front of as many appropriate eyeballs as possible.
In this case, you do not have the bandwidth to forge deep, meaningful relationships with every community in which you want a presence. You cannot physically be everywhere at once!
These kinds of superficial touches are things like conference sponsorships that involve little more than plunking down some cash and getting your logo/booth at an event; maybe you sponsor an open bar, or an unconference track. You give away the pens, the t-shirts, and the other branded swag, and you leave.
You write a bunch of high-level blog posts, you make sure you have docs with examples in a multitude of languages, and you go home, job done.
This is a great way to start, but it is mostly superficial. You will not identify your community advisors, you will not create long-lasting relationships, and you will get less valuable/actionable feedback.
What you should be doing is trying to identify communities in which you want to foster deeper relationships: the communities you want to pay special attention to because they need more of your time to understand your product, or better yet, the ones that are gaining traction and starting to provide interesting insights, use cases, and feedback. This leads to the next stage:
Depth First Evangelism
If you have the second type of product (i.e. specific to one community), or, you’re ready to start deepening the relationships you have started to cultivate through your breadth first evangelism efforts, it’s now time to start concentrating your efforts – and expanding your team.
This is when you will want targeted evangelists: a PHP Evangelist, a Ruby Evangelist, a Python Evangelist. These people bring their knowledge of a particular community to bear in determining what your messaging should be and how your interactions are orchestrated to achieve the community relationships your product needs.
Now you will start to do things like working with a conference organizer to create a sponsorship that provides much more value to the attendees, and can actually shape the feel of an event. You will start to sponsor open source work that is relevant to your company, and you will sponsor initiatives like diversity groups, and scholarships. You will make meaningful contributions to the community.
In return, you will get the insights into your product that help you improve it, and you will find your most hardcore users, your beta testers, and your contributors.
You will continue the blog posts, but they will be much more in-depth and targeted, based on feedback from the community about what they need to know, rather than what you think you want to tell them. Your documentation too will follow suit – perhaps you put together a book around using your product as members of that specific community.
There is an alternative to depth first and breadth first, which is to localize your efforts:
By constraining your geographic area, you are reducing your audience in the same way as in depth first evangelism, but you can still hit many different communities within that geographic area. To be clear here, I’m talking about an area which can be traversed within a day.
This allows the kind of deep relationships that add real value, but across a broader section of your audience.
This hybrid is potentially easier to scale, and it certainly cuts down on travel costs. However, you are less able to take advantage of your existing community presence (which is likely limited to one or sometimes two communities), or your breadth will be biased in those directions.
It’s Just The Megaphone
This is what I call the megaphone part of the role, the evangelistic, outward pushing part. But it’s just one part of the role. While it’s the most visible, I’d argue it’s not the most important. In the next post, I will cover the other aspects of the role that make community engagement invaluable.
AsciiDoc includes different types of callout boxes (tips, notices, warnings, etc.), sidebars, includes, custom attributes, cross references, and more.
My favorite feature is the ability to include portions of external files. This allows me to write syntactically valid PHP files, complete with open tags, that I can then test by passing them to php -l and/or running them (depending on what they do!). By including specific lines, I can omit the opening tag in the final output, or bring in specific lines and write about them as I go.
You can also add callout indicators. This works by adding <#> (where # is an integer starting from one) at the end of a line; if you precede it with a comment (e.g. // <1>), the comment syntax is stripped when rendering. You then use the same <#> syntax below the code block to write the annotation text for each callout.
For example, given the file code.php:
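A minimal example of such a file might look like this (the contents are purely illustrative):

```php
<?php

$date = new DateTime('2016-10-28'); // <1>

echo $date->format('D, d M Y'), "\n";
```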
I can include it, omitting the opening tag, and adding an annotation using:
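The AsciiDoc for that would be something along these lines (the line numbers are illustrative):

```asciidoc
[source,php]
----
include::code.php[lines=3..5]
----
<1> The annotation text for the first callout goes here.
```

The lines attribute skips the opening tag on the first line, and the <1> list item below the block becomes the rendered callout annotation.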
Now, O’Reilly Atlas, the SaaS tool from O’Reilly for publishing books, does some awesome things with AsciiDoc, such as building the final book in multiple formats (and I highly recommend checking it out; you can still publish the end result on, say, LeanPub). However, most of its features are possible locally using a tool like Asciidoctor, the most popular implementation of AsciiDoc rendering, which supports HTML5, DocBook, ePub, LaTeX, and PDF output.
OK, Enough About AsciiDoc, What About Atom?
Alright, so, what about authoring your content? One of the reasons I looked at Atom was that while Markdown has a multitude of available editors, AsciiDoc options are lacking. I started using TextMate with the AsciiDoc bundle, but I wanted live preview, so I switched to PhpStorm with the AsciiDoc plugin, but it didn’t seem to support the full feature set of asciidoctor, and it’s a bit overkill for the task.
For my latest attempt to find authoring nirvana, I’ve moved on to Atom, and that’s what you’re all here for.
13 Must Have Packages for Markdown and AsciiDoc Authoring in Atom
First the Markdown packages:
markdown-writer adds some conveniences to the authoring of Markdown, such as automatically continuing a list when you hit enter, and support for easier Jekyll authoring.
2. linter-markdown (and linter)
A linter for Markdown syntax
This one is a must have: it syncs the Markdown preview (Ctrl+Alt+m) scroll position to your current scroll position in the document.
Next, the AsciiDoc packages:
AsciiDoc Assistant is actually a meta-package for four other packages.
Syntax support and snippets for AsciiDoc (asciidoctor flavor); this one is a requirement.
Autocomplete for AsciiDoc syntax. I don’t use it much, but it makes some of the more verbose blocks easier to type.
An AsciiDoc version of the Markdown preview. It updates live, supports the full range of asciidoctor features, and, crucially, supports includes.
This is as much a necessity as the syntax support.
When pasting an image into an AsciiDoc document, automatically save it to a specified path and link to it.
Lastly, there are non-markup specific packages that make general writing in Atom much nicer:
A distraction-free, full-screen writing environment. Hit Cmd+Ctrl+F on OS X, or Shift+F11 on Windows and Linux, and it will hide most of the editor UI, including tabs (by default), the status bar, the file browser, etc.
It will also center your document and can automatically turn on soft wrap (which I recommend).
It works well with both the Markdown and AsciiDoc preview panes, as well as with the Markdown document outline from the markdown-document plugin, and the linter.
I have made one small change to my styles.less to force the status bar to be visible even in Zen mode.
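For reference, the tweak is along these lines (this assumes the Zen package toggles a zen class on the workspace, which may vary between versions):

```less
// Assumption: Zen adds a `zen` class to atom-workspace when active.
// Force the status bar to stay visible in Zen mode.
atom-workspace.zen status-bar {
  display: block;
}
```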
This addon simply enables soft wrap automatically based on the file extension. You can add .adoc, .asciidoc, and .md to the settings to make it apply to those files. auto-soft-wrap is not compatible with the Zen plugin, but Zen’s own soft wrap setting works fine (it always applies, though).
2. linter-write-good (and linter)
write-good is a naive linter for English prose, informing you of ways to improve your writing.
linter-write-good brings it into the Atom UI; using the linter interface, it highlights uses of passive voice, words that weaken meaning, and phrases that are wordy or unnecessary.
For an example of how this can impact your writing, this article was originally written without it, and then re-written following its suggestions — you can see the diff with the original article here for comparison.
The last addon is the reason I keep the status bar visible in Zen mode: it shows your word count, as well as the grade level and Flesch-Kincaid Reading Ease scores. This helps ensure that my writing isn’t unnecessarily complicated.
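For the curious, both scores are simple functions of word, sentence, and syllable counts. A quick sketch of the standard formulas (not the package’s actual code):

```php
<?php

// Standard Flesch-Kincaid formulas; counts come from the document being scored.
function readingEase(int $words, int $sentences, int $syllables): float
{
    return 206.835 - 1.015 * ($words / $sentences) - 84.6 * ($syllables / $words);
}

function gradeLevel(int $words, int $sentences, int $syllables): float
{
    return 0.39 * ($words / $sentences) + 11.8 * ($syllables / $words) - 15.59;
}
```

Higher Reading Ease (and a lower grade level) means simpler prose; for example, 100 words across 5 sentences with 130 syllables scores roughly 76.6, which is plain English.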
These plugins are available via the Atom UI, or apm. Hopefully these will improve your Atom authoring experience!
This post is over 2 years old and is probably out of date.
Just like the programming languages that are the centers of our communities, each community has its own set of rules and idioms — they have a real life syntax.
In my (almost) three years of being in a developer relations type role I have attended events in several communities and observed how they differ from my primary community (PHP). As I’ve tried to understand those differences and the reasons behind them, I’ve also had many discussions with members of many more communities.
After attending my first PyCon (US) I was struck by just how welcoming and diverse the community is and had many conversations trying to understand why this is. This is not what this post is about. This post is about conferences specifically, and how communities place different priorities on different things when it comes to how they run, organize, speak at, and attend events.
The first thing I found that was unique about the PHP community is that every event with an open CFP (i.e. excluding local community days, hackathons, etc.) will reimburse at least some travel and lodging costs.
Larger community and for profit events will typically cover all flight costs and at least (number of talks)+1 nights hotel. Some will limit the number of international speakers to keep costs manageable, while others will have a fixed amount of costs they can cover, and the rest is up to the speaker.
In fact, the reason I started speaking at conferences was because of this, I would never have been able to afford to attend otherwise.
These may be paid up front with the conference booking the flights on your behalf, or they may be a refund after the fact.
In the last few years the smaller community events have (probably because it’s a default in OpenCFP) added checkboxes to their CFPs which allow you to request help with travel and/or accommodation. I think this change has come about as the number of speakers whose jobs are willing to pay their way has increased.
Additionally, you would get a full ticket to the event. Speakers do not get paid anything additional such as an honorarium.
Due to these costs incurred by events, ticket prices are usually somewhere from $150-350 with tutorial days at additional cost. A few events are much more expensive (~$1000+), but are typically not aimed at the community crowd.
Another thing I’ve noted is that it’s common for speakers to submit multiple talks to conferences and this is highly encouraged to people who wish to get a talk accepted: never submit just one!
As for the talks themselves, they range from a minimum of 40 minutes (which is rare) to a maximum of 60 minutes; 50 minutes is the most common format.
The community which I spend the most time comparing (mostly with PJ) is Ruby. The first major difference I noticed was that there were no speaker packages. I was outraged by this, it felt like the equivalent of “it’ll be good for your portfolio”.
Then I found out why: the Ruby community values accessible ticket prices, and wants a price point low enough to make the event feasible for as many people as possible, including students, single parents, etc.
This would be impossible if speakers travel/hotel were paid for by the event.
Speakers are given a ticket to the event, and I’m sure for some that’s reason enough to speak, while others do it for the exposure and networking opportunities. Additionally, they seem to do it to contribute back to the community.
Also, when it comes to CFPs, numerous people I spoke to early on were aghast to learn that in the PHP community we submit multiple talks. However it seems that this is changing, and it is becoming more common to do so.
Talks are usually 25 minutes, with the max being 35 minutes. Some are as little as 20 minutes.
The Python community seems to be somewhere in the middle. While Python events do not cover a speaker’s flight and hotel, the PSF has a formal grant program which allows those unable to afford to attend to request financial support to do so.
And while ticket prices are not as cheap as Ruby events, they do have multiple levels of ticket prices: for example, student, regular, and business tickets. For PyCon US, those were $125, $300, and $600 respectively. And everyone buys a ticket, including the event chairs, volunteers, and speakers. Again, the PSF grant program can help here.
Submitting multiple talks seems to be the norm. And talks are 35-40 minutes long.
I have only recently started discussing this topic with members of the Perl community, but they seem to take a similar approach: low-cost tickets, speaker costs not covered, and 35-40 minute talk slots. I may be wrong on this, but that’s what I understand to be the case. I am unsure whether speakers are given tickets to the event, or how CFPs are typically run.
There’s No Right Answer
It is obvious to me that community is hugely important to everyone I’ve spoken to about this, and I find it interesting how different communities have different priorities resulting in different conference experiences.
I will say that the Python way seems like a much more democratic and inclusive way to handle things, but that it depends on the PSF which is a core part of its community and requires monumental effort, oversight, and tons of volunteers. As an aside, the PSF also seems instrumental in the inclusiveness and diversity exhibited by the Python community.
At the end of the day, I think we all have valid motivations. The PHP way ensures that regardless of your financial situation, if you have something valuable to share then you can do so, while the Ruby way ensures that the most possible people can attend to hear what is being shared, and the Python way tries to strike a middle ground.
I do find it interesting how consistent things seem to be within communities, perhaps simply through early leaders setting the expectations, but it does show some of the differences of opinion on how to build a thriving community — and yet even with these differences, they are all large, thriving communities.
This post is over 3 years old and is probably out of date.
A lot has been said about when to use public/private/protected. Some say that you should never use private, while others say you should never use protected.
About the only thing that people can seem to agree on is that public should be used with caution. That your versioning should be based around your public API, and you have a responsibility to maintain that API.
The issue is mainly around the maintenance responsibility of protected vs private. Given that other developers can depend upon protected, it can effectively be considered to have the same overhead as your public API.
I personally very rarely use private properties, and I default to protected.
My thinking is thus:
The public API is sacrosanct, following SemVer, breaking the public API shall bring a new major version. I will do my best to support an existing public API for as long as possible and only break backwards compatibility as a last resort.
The protected API is important. It does not impact SemVer, though changes to it often accompany releases at the X.Y level. However, I pledge to document any changes in release notes and elsewhere, to ensure that if you pay attention you will be able to track changes and avoid issues. I will make changes to the protected API as necessary. (n.b. as I write this, I’m considering making it a rule that X.Y should always be incremented for protected changes)
The private API is the wild west, and you’re on your own.
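As an illustration (a hypothetical class, not from any particular library), the three tiers look like this in practice:

```php
<?php

class Greeter
{
    // Public API: covered by SemVer; breaking this means a new major version.
    public function greet(string $name): string
    {
        return $this->format($name);
    }

    // Protected extension point: subclasses may rely on it, but changes may
    // land in X.Y releases (documented in the release notes).
    protected function format(string $name): string
    {
        return sprintf('Hello, %s!', $this->sanitize($name));
    }

    // Private implementation detail: no stability promises whatsoever.
    private function sanitize(string $name): string
    {
        return trim($name);
    }
}
```

A subclass overriding format() should watch the release notes; anything touching sanitize() is on its own.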
I think this is a pretty pragmatic approach, it preaches caution and engagement when extending third party libraries, yet does not needlessly prohibit the ability to extend entirely.
This post is over 3 years old and is probably out of date.
I’ve now spent the better part of four months looking at HTTP/2 (or H2 as all the cool kids are calling it) and in particular Server Push.
Server Push is super exciting to me, paired with multiplexing — the ability to perform multiple HTTP transactions on a single TCP connection — I believe it has the potential to change how we deliver the web by allowing the server to push content to the client (specifically, into the client cache) proactively.
How Server Push Works
To explain Server Push, you must first understand HTTP/2.
HTTP/2 requests are sent via streams — a stream is the unique channel through which a request/response happens within the TCP connection. Each stream has a weight (which determines what proportion of resources are used to handle them) and each can be dependent on another stream.
Streams are made up of frames. Each frame has a type, length, any flags, and a stream identifier, to explicitly associate it with a stream.
The ability to interleave these frames, and the fact they can belong to any stream, is the basis of multiplexing.
One of those frame types is the SETTINGS frame, which is how the client can control whether or not to use Server Push.
Server Push is enabled by default; the client can disable it by sending the SETTINGS_ENABLE_PUSH parameter (0x2) with a value of 0.
When it is disabled, the server should not send any pushes. When it is enabled, a push is started by sending another type of frame, known as a PUSH_PROMISE.
The purpose of this frame is to inform the client that the server wants to push a resource, and to give the client the option to reject the pushed stream (by sending back a RST_STREAM frame). Each pushed resource is then sent in its own stream to the client and should be stored in the client cache — it will then be retrieved from the cache when it is requested by the client, rather than fetched from the server.
Pushed resources must be cacheable, as they are required to still be fresh when the actual request occurs. This means they should be the result of idempotent GET requests.
That’s Cool and All, but it’s Not Revolutionary…
As I said, I think Server Push and Multiplexing can change the web.
In the near term, we can start to simplify our web setups; multiplexing obsoletes domain sharding (in fact, sharding can be a detrimental practice, though not always), as well as a number of frontend strategies for performance tuning, such as inlining of resources for above-the-fold rendering, image sprites, and CSS/JS concatenation and minification.
Thinking longer term, we will start to see new strategies emerge, such as pushing the above-the-fold JS/CSS as separate resources with high priority along with the requested HTML, followed by the rest of the CSS/JS with a lower priority.
Or making webfonts dependent on the CSS file in which they are used.
But The Web Isn’t Just Websites…
Another casualty of HTTP/1.1 is APIs. APIs often have to choose between combining sub-resources into the parent resource (sending more data than necessary if they are not wanted, slowing down the response), or making many more requests for those sub-resources.
With Server Push and Multiplexing, the server can push those sub-resources along with the request for the parent resource, and the client can then choose to reject them if it doesn’t want them.
Alright, but what do you mean we’re doing it wrong?
Currently, the most popular way to do server push is for your application to send Link: </resource>; rel=preload headers, which inform the HTTP server to push the resource in question. However, this format is defined by the new W3C Preload Specification (Working Draft), which is not intended for this purpose (although there is some disagreement).
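Concretely, such headers look like this (the Link syntax of RFC 5988 puts the URL in angle brackets; the paths are illustrative):

```
Link: </css/site.css>; rel=preload; as=style
Link: </js/app.js>; rel=preload; as=script
```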
The preload Link is intended for the browser; per the specification, it:
provides a declarative fetch primitive that initiates an early fetch and separates fetching from resource execution.
It is related to the (so-called) Prebrowsing features — which allow you to instruct the browser to do a number of things to improve the performance of sub-resources (everything from pre-emptively doing a DNS lookup, opening a TCP socket, to fetching and completely pre-rendering the page in the background).
I like the solution of using headers to initiate pushes. This makes it something that can easily be done in non-async/parallel/threaded languages (e.g. PHP or Ruby) — with zero language changes necessary — and pushes the responsibility up to the HTTP layer.
Unfortunately, you run into a potential issue of being unable to distinguish between preloading and server push; and you may wish to use both for different assets — for example, you might want to use prefetching for your stylesheet, which when retrieved could have its fonts and images pushed. Furthermore, using preload for pushes could introduce a race condition between a push and a preload for the same resource.
We don’t want to clobber the Preload specification, so: why not just change it to Link: </resource>; rel=push?
By doing this we add enough granularity to distinguish between the two, and avoid a potential race condition. The header would be stripped by the httpd when it handles the push. If the client does not accept pushes (which the server knows thanks to the SETTINGS frame) the header should be passed through as-is (or can be changed to rel=preload) and the browser can then handle it as a preload instead.
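Under this (proposed, not standardized) scheme, an application could emit both kinds of hints side by side (paths illustrative):

```
Link: </css/site.css>; rel=preload
Link: </fonts/body.woff2>; rel=push
```

The first line passes through to the browser as a plain preload, while the second is consumed by the httpd, which pushes the font and strips (or downgrades) the header before forwarding the response.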
If neither preload nor push is supported, then the asset is requested using the traditional methods (e.g. link, script, and img tags, or url() within stylesheets). This allows for a simple, robust, progressive enhancement mechanism for the loading of assets.
This post is over 3 years old and is probably out of date.
Please see the updates below
This was going to just be a tweet, but I felt it needed more than 140 characters.
If you know me in person, you’ve probably met me at a conference or user group. It should be immediately apparent that I am a confident, outspoken person (at least publicly).
I’m a former PHPWomen US Lead, as well as a co-organizer for Prompt, raising awareness about mental health in technology; I’m no stranger to speaking out on tough subjects.
Furthermore, I’ve been a contributor to the PHP community for 15 years, and an internals contributor for half of that (since PHP 5.3.0).
The fact that I — as that person, and as a member of the majority makeup of the PHP community — don’t feel safe contributing back to the Code of Conduct discussion in favor of it — should tell you everything you need to know about why we need one. This is due in part to the hostility from some people involved, and to possible backlash from the wider community.
So let me state here, for the record, in unequivocal terms:
I fully support a Code of Conduct, and a reporting infrastructure in which we can act to remove those who are unable to abide by the Code of Conduct from our community.
This should be applied to all mailing lists, all websites (e.g. comments in the manual, news items, etc.), and all social media put out by the PHP project itself.
It should also apply anywhere a contributor is using their @php.net email address — whether that’s a private email conversation, or at a conference/user group where you put it on your contact slide. Obviously, while the person could not be removed from the event, they can be restricted from contributing based on their behavior.
Also: thank you Anthony for bringing this RFC to the table.
This has gotten quite a bit of support and a couple of questions, so I wanted to add one specific thing:
To clarify, I don’t support the Contributor Covenant as it stands as the CoC for PHP; I think it is a good starting point, but it is just that, a starting point. Additionally, regardless of the content of the CoC, what matters more is documented procedures for handling reported violations.