Lessons from Mark Manson’s “Models: Attract Women Through Honesty”

I picked up Mark Manson’s “Models: Attract Women Through Honesty” because I was intrigued by the title and because I am at a time in my life where I’d like to meet more interesting women. It did not disappoint. The book was insightful, and, like other compelling reads, it pushed me to look hard at myself and how I’ve been living my life, this time particularly on the subject of women. The concepts of neediness and vulnerability are, for me, the main takeaways.

Here are just a few of the noteworthy lines from the book:

  • In our post-industrial, post-feminist world, we lack a clear model of what an attractive man is. Centuries ago, a man’s role and duty was power and protection. Decades ago, it was to provide. But now? We’re not quite sure. We are either the first or second generation of men to grow up without a clear definition of our social roles, and without a model of what it is to be strong and attractive men.
  • Seduction is an interplay of emotions. Your movement or lack of movement reflects and alters emotions, not the words. Words are the side-effect. Sex is the side-effect. The game is emotions, emotions through movement.
  • In surveys among literally tens of thousands of women, across all cultures, ethnicities, age groups, and socio-economic standing, and even time periods, there’s one universal quality in men that they all find desirable: social status and access to resources. The amount in which they desire it varies from culture to culture and from age group to age group, but the desire for it is universal.
  • Social status is determined by how you behave around other people, how other people behave around you, and how you treat yourself.
  • Female arousal is somewhat narcissistic in nature. Women are turned on by being wanted, by being desired. The more physical assertiveness you pursue a woman with, the more aroused she becomes — sometimes even if she wasn’t interested in you to begin with.
  • How attractive a man is is inversely proportional to how emotionally needy he is. The more emotionally needy he is in his life, the less attractive he is and vice-versa. Neediness is defined by being more highly invested in other people’s perceptions of you than your perception of yourself.
  • All people eventually return to their baseline levels of investment. And until one is able to permanently alter his baseline level of identity investment in themselves, they will continue to attract the same types of women, and end up in the same failed relationships. Permanent change to one’s investment and neediness in their relationships with women is hard and a process that encompasses all facets of one’s life. But it’s a worthwhile journey. As a man, it may be the most worthwhile journey. And the key to it is probably something you wouldn’t expect. In fact, it’s something that most men turn their nose up to when they hear it. It’s vulnerability.
  • Making yourself vulnerable doesn’t just mean being willing to share your fears or insecurities. It can mean putting yourself in a position where you can be rejected, saying a joke that may not be funny, asserting an opinion that may offend others, joining a table of people you don’t know, telling a woman that you like her and want to date her. All of these things require you to stick your neck out on the line emotionally in some way. You’re making yourself vulnerable when you do them. In this way, vulnerability represents a form of power, a deep and subtle form of power. A man who’s able to make himself vulnerable is saying to the world, “I don’t care what you think of me; this is who I am, and I refuse to be anyone else.” He’s saying he’s not needy and that he’s high status.
  • Show your rough edges. Stop trying to be perfect. Expose yourself and share yourself without inhibition. Take the rejections and lumps and move on because you’re a bigger and stronger man. And when you find a woman who loves who you are (and you will), revel in her affection.
  • The biggest criticism of showing interest to a woman that you want to be with is that it immediately shows you as highly invested in her responses. When you say, “You’re cute and I wanted to meet you,” that translates roughly to, “Hi, I want to be with you and am officially invested in the prospect of it happening.” What they miss though is the sub-communication going on underneath what’s actually being said. The sub-communication is, “I’m totally OK with the idea of you rejecting me, otherwise I would not be approaching you in this manner.”
  • True honesty is only possible when it is unconditional. The truth is only the truth when it is given as a gift — when nothing is expected in return. When I tell a girl that she is beautiful, I say it not expecting anything in return. Whether she rejects me or falls in love with me isn’t important in that moment. What’s important is that I’m expressing my feelings to her. I will give compliments only when I am honestly inspired to give them, and usually after already meeting a woman and displaying to her that I’m willing to disagree with her, willing to be rejected by her and willing to walk away from her if it ever comes to that.

Notes from David Bryant Copeland’s “Build Awesome Command-Line Applications in Ruby 2”

Writing and running automated checks in recent years, as well as building some personal apps that run on the terminal, has given me a keen appreciation of how effective command-line applications can be as tools. I’ve grown fond of quick programming experiments (scraping data, playing with Excel files, among others), which are relatively easy to write yet powerful, dependable, and maintainable. Tons of libraries online help these apps interface with the myriad of programs on our desktops or out on the web.

Choosing to read “Build Awesome Command-Line Applications in Ruby 2” is choosing to go on an adventure in writing better CLI apps: finding out how options are designed and built, understanding how apps are configured and distributed, and learning how to actually test them.

Some notes from the book:

  • Graphical user interfaces (GUIs) are great for a lot of things; they are typically much kinder to newcomers than the stark glow of a cold, blinking cursor. This comes at a price: you can get only so proficient at a GUI before you have to learn its esoteric keyboard shortcuts. Even then, you will hit the limits of productivity and efficiency. GUIs are notoriously hard to script and automate, and when you can, your script tends not to be very portable.
  • An awesome command-line app has the following characteristics:
    • Easy to use. The command-line can be an unforgiving place to be, so the easier an app is to use, the better.
    • Helpful. Being easy to use isn’t enough; the user will need clear direction on how to use an app and how to fix things they might’ve done wrong.
    • Plays well with others. The more an app can interoperate with other apps and systems, the more useful it will be, and the fewer special customizations that will be needed.
    • Has sensible defaults but is configurable. Users appreciate apps that have a clear goal and opinion on how to do something. Apps that try to be all things to all people are confusing and difficult to master. Awesome apps, however, allow advanced users to tinker under the hood and use the app in ways not imagined by the author. Striking this balance is important.
    • Installs painlessly. Apps that can be installed with one command, on any environment, are more likely to be used.
    • Fails gracefully. Users will misuse apps, trying to make them do things they weren’t designed to do, in environments where they were never designed to run. Awesome apps take this in stride and give useful error messages without being destructive. This is because they’re developed with a comprehensive test suite.
    • Gets new features and bug fixes easily. Awesome command-line apps aren’t awesome just to use; they are awesome to hack on. An awesome app’s internal structure is geared around quickly fixing bugs and easily adding new features.
    • Delights users. Not all command-line apps have to output monochrome text. Color, formatting, and interactive input all have their place and can greatly contribute to the user experience of an awesome command-line app.
  • Three guiding principles for designing command-line applications:
    • Make common tasks easy to accomplish
    • Make uncommon tasks possible (but not easy)
    • Make default behavior nondestructive
  • Options come in two forms: short and long. Short-form options allow frequent users who use the app on the command line to quickly specify things without a lot of typing. Long-form options allow maintainers of systems that use our app to easily understand what the options do without having to go to the documentation. The existence of a short-form option signals to the user that that option is common and encouraged. The absence of a short-form option signals the opposite — that using it is unusual and possibly dangerous. You might think that unusual or dangerous options should simply be omitted, but we want our application to be as flexible as is reasonable. We want to guide our users to do things safely and correctly, but we also want to respect that they know what they’re doing if they want to do something unusual or dangerous. (See the sketch after this list.)
  • Thinking about which behavior of an app is destructive is a great way to differentiate the common things from the uncommon things and thus drive some of your design decisions. Any feature that does something destructive shouldn’t be a feature we make easy to use, but we should make it possible.
  • The future of development won’t just be manipulating buttons and toolbars and dragging and dropping icons to create code; the efficiency and productivity inherent to a command-line interface will always have a place in a good developer’s tool chest.
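
To make the short/long option idea concrete, here is a minimal sketch using Ruby’s standard-library OptionParser. The app name and flags are made up for illustration, not taken from the book’s example apps:

    # A minimal sketch of short- and long-form options with Ruby's
    # standard-library OptionParser. The app name and flags are illustrative.
    require 'optparse'

    options = { force: false }

    OptionParser.new do |opts|
      opts.banner = 'Usage: backup [options] DIRECTORY'

      # Common, encouraged option: gets both a short and a long form.
      opts.on('-u USER', '--username USER', 'Database username') do |user|
        options[:username] = user
      end

      # Unusual, potentially destructive option: long form only,
      # signaling that it should be used deliberately.
      opts.on('--force', 'Overwrite an existing backup file') do
        options[:force] = true
      end
    end.parse!

    p options # parsed flags
    p ARGV    # remaining arguments, e.g. the DIRECTORY

Running something like backup --force -u admin some_dir would set both options and leave some_dir in ARGV.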

Dockerizing Our Legacy Apps: Some Notes

I spent the recent weeks of January tinkering with Docker on both Windows 7 and macOS. I played with it a lot because I thought it would be useful for a grand new project we have at work, and I figured that integrating our legacy application code with it would help me learn the tool more deeply. The exercise did help me understand it better, including some nuances around application performance and database connections. I was able to dockerize our legacy apps too! 🙂

Some notes to remember related to the exercise:

  • Windows 7 Docker Toolbox and Docker for Mac have a performance issue with volume mounts through docker-compose. Legacy apps composed of a large number of files (especially with dependency directories) will run, but they will be painfully slow out of the box. Fortunately for Docker for Mac users, docker-sync offers an effective workaround: it runs a separate sync container for the application code, configured outside the docker-compose file (see the rough docker-sync sketch after this list). Unfortunately, I have not found any workaround for this performance issue for Windows 7 (and perhaps Windows 8) Docker Toolbox users.
  • Often we have to update the hosts file of our server machine so that we can run applications locally using a distinct, easy-to-remember URL in a browser. This means we need to add extra hosts to the relevant Docker containers too if we dockerize our apps. We can do this with the extra_hosts option in docker-compose (see the snippet after this list).
  • The official PHP Docker image does not include the pdo, pdo_pgsql, and pgsql extensions, which handle the connection between the application and the Postgres database. To install those extensions, we’ll need a custom Dockerfile and build it from the docker-compose file with the build and context options (see the sketch after this list).
  • Sometimes we need to copy the PostgreSQL data files out of a running container to set up a proper volume mount of database data from host to container. We can do that with the convenient docker cp <source> <destination> command (see the example after this list). This worked for me in Docker for Mac. For Windows 7 Docker Toolbox users, however, a container is unable to use such copied data, perhaps because of the difference in OS between host and container, so I had to resort to restoring and backing up data every time I start and stop my application containers.
  • As a tester, what Docker gives me is a convenient way to test all sorts of interesting application configurations on a single machine: to see whether the apps break if I change some service config, and to find out which configurations work and which don’t. I can add or remove a service, or update an existing service to a new version, like moving PHP from 5.6 to 7.1, and immediately see what impact it has on the apps themselves. These kinds of tests are often left to operations engineers, but I’m glad there is now a way to run them on my own machine before application changes even reach a dedicated testing server.
  • Even though Docker makes it easy to set up an application development environment from scratch with docker-compose and Dockerfiles, it is still important to maintain a wiki of the machine configurations a programmer needs in order to reset or build the apps with only a command or two: subtle things like custom docker-compose files, .env and php.ini files, hosts files, Nginx configs, and shortcuts that wrap long docker commands in shell scripts or make targets.
  • Makefile targets can help put specific scripts behind easy-to-remember commands with context (see the sketch after this list). I assume a Rakefile does the same thing in Ruby, or a Jakefile in JavaScript.
  • Dockerizing our legacy apps pushed me into a discussion with programmers about how they run said applications on their machines. Most of them actually just test code changes directly on Staging or another available development server. That says something about our habits as a development team, and it is likely the reason why our apps are a pain to set up locally.
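
About the volume-mount slowness on Docker for Mac: the docker-sync setup looks roughly like the sketch below. The volume name, paths, and even some keys are assumptions on my part and may differ between docker-sync versions, so treat it as an outline rather than a copy-paste config.

    # docker-sync.yml (separate file; keys may vary between docker-sync versions)
    syncs:
      appcode-sync:        # name of the synced external volume
        src: './app'       # application code on the host

    # docker-compose.yml then mounts the synced volume instead of the host path
    services:
      app:
        volumes:
          - appcode-sync:/var/www/app:nocopy

    volumes:
      appcode-sync:
        external: true     # created and kept in sync by `docker-sync start`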
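
For the hosts-file note, adding custom host entries to a container with extra_hosts looks something like this (the service name, hostname, and IP are made up for illustration):

    # docker-compose.yml fragment; hostname and IP are placeholders
    services:
      app:
        image: php:7.1-fpm
        extra_hosts:
          - "legacyapp.local:192.168.99.100"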
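
For the missing Postgres-related PHP extensions, a small custom image does the job. A sketch, with the directory layout and PHP version as assumptions:

    # docker/php/Dockerfile
    FROM php:7.1-fpm
    # libpq-dev provides the client library the extensions compile against
    RUN apt-get update \
        && apt-get install -y libpq-dev \
        && docker-php-ext-install pdo pdo_pgsql pgsql

    # docker-compose.yml fragment: build the image above instead of pulling one directly
    services:
      app:
        build:
          context: ./docker/php
          dockerfile: Dockerfile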
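
Copying database files out of a running container is a one-liner; the container name and paths here are placeholders:

    # Copy the Postgres data directory from a running container to the host
    docker cp legacy_db_1:/var/lib/postgresql/data ./pgdata

    # The copied directory can then be mounted back in as a volume, e.g.
    #   volumes:
    #     - ./pgdata:/var/lib/postgresql/data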
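
And for wrapping long docker commands behind short, memorable commands, a few illustrative Makefile targets (the names and flags are just examples):

    # Makefile (recipes must be indented with tabs)
    up:
    	docker-compose up -d              # start everything in the background

    down:
    	docker-compose down               # stop and remove the containers

    rebuild:
    	docker-compose build --no-cache   # rebuild the images without the cache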

Five People and Their Thoughts (Part 5)

It’s been a while since I’ve shared interesting articles and recordings about topics surrounding software development and testing.

Here are a few videos, for the curious:

  • Continuous Delivery – Sounds Great But It Won’t Work Here (by Jez Humble, about continuous delivery, the excuses organizations give when they fail to implement it, and what those excuses actually mean)
  • There and Back Again – A Hobbit / Developer / Tester’s Journey (by Pete Walen, on how software was built in the old days, how testing and programming broke up into silos, and a challenge for both parties to get back to excelling at each other’s skills and teaming up)
  • Feature Injection (by Chris Matts, about business analysts and tea bags, understanding that requirements are just dependencies, and finding out where the value is in requested features by asking for examples)
  • A Day of Mob Programming (by Woody Zuill, on mob programming, and how taking a whole team approach to coding can help us align and build better software together)
  • Testability vs Automatability (by Alan Richardson, about the differences between testability and automatability, what each term actually means, and recognizing how specific words help or hinder the stories people tell)

One-on-Ones

For the past few years I have run in-office software-testing knowledge-sharing and tools-tutoring sessions for junior testers at work. It’s nothing fancy; the sessions are minimal but fairly regular, and my goal has only been to pass on some of the interesting lessons I’ve come to believe are true based on study and experience. I want them to become curious about the industry they’re in, and I want them to take ownership of their own growth as testers.

I’ve always run the sessions as a group since there are only a few of them; it’s easier to manage that way. But this year I’ll try working with each tester one-on-one, and I’m having them select a skill they think they need to learn more of, instead of me just setting the agenda for group meetings. Each tester will have a weekly one-hour one-on-one with me: we’ll review what the tester understands about the topic of their choosing, we’ll study together, and then I’ll provide challenges for them to work through until the next one-on-one. It’s an engaging setup I haven’t tried before, something that will likely eat more of my time and attention, but it’s worth testing since the organization recently shifted to working remotely about half of the time.

Pacing Ourselves Well

For any software tester, or any software professional, it matters to be curious about the industry we choose to be a part of, and to be knowledgeable about the people we work with and the tools we use to do our best work. It helps to be aware of existing practices and to stay in the loop with the news, because that’s how we find solutions to problems, and sometimes more problems to solve. We use the software we test. We find answers on the web. We try applications that might make us more productive. We network with people like us on the internet, and we go online, study, and digest whatever we can. That’s part of how we improve our skills. That’s part of how we grow, and help others along the way.

But learning takes time, effort, and energy. It’s not just about reading and watching everything, taking every online course, and going to every conference there is. Our minds don’t work like that. We have to pace ourselves well. We need to take breaks in between, we need time for details to sink in, and we need contemplation and scrutiny to guide us where to move next and why. Some reflection happens in discussions with colleagues, family, and friends, while other realizations occur only when we are truly alone with ourselves.

Extending The Avenues Of Performing Testing

Last Wednesday afternoon I anxiously asked my boss for permission to make changes to our application code repository. I said I wanted to try fixing some of the reported bugs listed in our tracking system whenever no one else is available to take them. I made a case that I wouldn’t pose any problems because of the code review process built into our repository management tool: there’s no reason for me to merge any changes without getting feedback from a senior developer first.

He smiled at me and gleefully said “Go ahead. I’m not going to stop you,” to which I beamed and heartily replied “Thanks, boss!”

This is a turning point in my software testing career: being able to work on the application code directly as needed. One of my biggest frustrations has been not being able to find out for myself where a bug lives in the code and fix it if necessary. It’s always a pain to be able to do nothing but wait for a fix, and for that fix to depend on whoever happens to be available. In my head I think that I’m available and maybe I can do something, but without access to the application and the code that runs it, I can’t do anything until I have the rights to do so. That’s how it has always been. Software testers are often not expected to fiddle with code, at least in my experience, especially in the past, when automation was not yet widely recognized as a useful testing tool. Now that I have the skills and the permission to work on the application repository, I feel that my reach for making an impact on application quality has expanded remarkably.

Now, bug-fixing is not software testing work in the traditional sense. But I figured there’s no harm in trying to fix bugs and learning the nitty-gritty details of how our legacy applications actually run deep in the code. I believe that learning technical stuff helps me communicate better with programmers, and it helps me test applications more efficiently too. Of course, I have to consistently remind myself that I am a software-tester-first, programmer-second kind of guy, and be careful not to fill my days playing with code while forgetting to explore the applications themselves. That said, there are ideas I really want to experiment with in our software development process, toward the goal of improving code quality and feedback, and I can only tinker with those ideas inside the application repository itself. Dockerized testing environments, code linting, and unit tests are three things I want to start building for our team, ideas that I consider very helpful in writing better code but that have not been given enough priority through the years.

I think I’m still testing software, just extending the knowledge and practice of the various ways I perform testing.