What’s My MongoDB Password After Installing the AWS-Bitnami MEAN Stack AMI?

Bitnami’s MEAN stack AMI installation generates a random ‘application’ password for you. This is your ‘root’ user’s MongoDB password. To find it, open your AWS ‘EC2 Management Console’, right-click on the row of your launched instance, and select ‘Get System Log’. A console-like pop-over will appear. Search the log text for something like the following:

##########################################################
#                                                        #
# Setting Bitnami application password to ‘CvQyXko0F8tz’ #
#                                                        #
##########################################################

Now you have the username = ‘root’ and password = ‘CvQyXko0F8tz’. For example, you can express your MongoDB connection string as:

'mongodb://root:CvQyXko0F8tz@localhost/mydb'
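
A minimal Node.js sketch using Mongoose (the database name ‘mydb’ below is just a placeholder for whatever database you actually created) might then connect like this:

// Minimal sketch: connect to the instance's MongoDB with Mongoose.
// The password and the database name 'mydb' are placeholders; substitute
// the values from your own system log and your own database.
var mongoose = require('mongoose');

mongoose.connect('mongodb://root:CvQyXko0F8tz@localhost/mydb');

mongoose.connection.on('connected', function () {
  console.log('Connected to MongoDB as root');
});

mongoose.connection.on('error', function (err) {
  console.error('MongoDB connection error: ' + err);
});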

If Scotch.IO Were a Man, I’d Marry Him.

Sometimes you run across someone who completely inhabits your own world view. He completely gets you. Knows where you’re coming from. Knows where you’ve been. He brings a warm, familiar, and happy feeling, like a country song that ends well.

Scotch.IO, the web site, is like that for me.

Scotch.IO is the site of a design and development firm out of Las Vegas and D.C.

What completely wins me over — and this is key — is that Scotch.IO loves to share. It shows me what’s new. It shows me how to do things. Scotch.IO makes me a better person. The site is about tips and tutorials biased toward the MEAN stack and modern Web development tools and techniques and everything I, myself, hold dear.

The content is all original, but it’s all so consistently excellent and on the mark that it seems curated. The perfect web site for me. I’m in love.

Top 3 Executive Summaries: How Twitter Works

There are technologies I sometimes purposely avoid working with because, knowing myself, I’ll want to learn more and more and I’ll download, install, and play with all the libraries and frameworks and I’ll Google all the major contributors and check out their other GitHub repositories and maybe contribute an open-source thing-that-hasn’t-been-done-with-it-yet and generally spend a long and furious and obsessive amount of time on something that won’t immediately help pay the mortgage.

Twitter was one of these.

But recent projects required working with the Twitter Platform and I needed to know what I didn’t know. Curated list follows:

1. Jessica Hische’s patient and easy walkthrough by example of the Twitter protocol. It’ll make some sense after reading this.

http://www.momthisishowtwitterworks.com/

2. Explania’s great 3-minute animated tutorial:

http://www.explania.com/en/channels/technology/detail/twitter-explained

3. For the executive who needs at least to recognize the buzzwords, so he can nod in understanding at the appropriate times during the developers’ stand-ups:

https://dev.twitter.com/docs

Top 3 Things to Know about Iproute2

  1. It’s the modern way to control TCP/IP networking and traffic in Linux. As an analogy, Iproute2 is to Net-tools as Git is to ClearCase.
  2. Net-tools is deprecated. Iproute2 consolidates the net-tools commands into the “ip” command and takes it to the next level (for example, “ip addr” replaces ifconfig and “ip route” replaces route). See the Wikipedia entry.
  3. If you invoke “ip” from your Linux command line and get a response, you have iproute2:
$ ip
Usage: ip [ OPTIONS ] OBJECT { COMMAND | help }
       ip [ -force ] -batch filename
where  OBJECT := { link | addr | addrlabel | route | rule | neigh | ntable |
                   tunnel | tuntap | maddr | mroute | mrule | monitor | xfrm |
                   netns }
       OPTIONS := { -V[ersion] | -s[tatistics] | -d[etails] | -r[esolve] |
                    -f[amily] { inet | inet6 | ipx | dnet | link } |
                    -l[oops] { maximum-addr-flush-attempts } |
                    -o[neline] | -t[imestamp] | -b[atch] [filename] |
                    -rc[vbuf] [size]}

Curated Links

Some Useful Commands
How To
IP Command Reference
Load Balancing and Shaping by Realms Example 
Socket Statistics on Linux


The Return of Sockets.

Two decades ago, I had to quickly integrate a number of individual research applications under a cohesive management interface. I ended up spawning tasks and streaming results from one module to another through BSD sockets. There really was no other solution. It was quick to do and the performance of the system was incredible.

Then things got heavy with CORBA and SOAP and the like, and the programming world started to feel monolithic and bureaucratic and very heavy.

It’s great to see the return of sockets and streaming to the mainstream. I think machine-to-machine communication is going to get fun again.

Check these out:

http://zeromq.org/

http://architects.dzone.com/articles/mqtt-over-websockets
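
To make that concrete, here is a minimal sketch in plain Node.js, using the built-in net module rather than ZeroMQ or MQTT (the port and the payloads are made up): one module streams results over a TCP socket and another consumes them as they arrive.

// Minimal sketch: a producer streams results over a plain TCP socket
// and a consumer reads them as they arrive. Port 3000 and the payloads
// are placeholders.
var net = require('net');

// Producer: stream results to whoever connects.
var server = net.createServer(function (socket) {
  socket.write('result 1\n');
  socket.write('result 2\n');
  socket.end('done\n');
});

server.listen(3000, function () {
  // Consumer: read the stream as it comes in.
  var client = net.connect(3000);
  client.on('data', function (chunk) {
    process.stdout.write(chunk.toString());
  });
  client.on('end', function () {
    server.close();   // all results received; shut the producer down
  });
});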

It’s Promises, All the Way Down

I’ve been working with AngularJS and using its Promises for a while now.

I tend to think of a Promise as an IOU that Angular hands me. No…  more as an empty box for data that Angular has yet to retrieve. Often the data must be retrieved from the Internet, or a database, or some other place, all too far away in time —  for a snappy, no-lag UI/X that people expect nowadays — to rely on.

In Angular, I don’t need the data before I start building a view with it. I use its empty-box Promise instead.

The thing is,  it is Angular that first hands me the empty Promise box in response to, say, a database request.

I choose, at some point in my program, to hand the Promise box, unopened, back to Angular to build a web page screen around the box’s contents.

It is Angular that builds the page, with everything but the data. It is Angular that quietly stuffs the data in the Promise box when it finally gets the real thing. And  it is Angular that rather dramatically pulls the true data out of  the box and pops it on the screen.

I never had to look in the box.

Promises work great and can remain forever unseen under the covers of the Angular framework. Being able to move on in the programming sequence and create your typical CRUD lists and forms based on objects as yet unrealized feels good for me as a UI programmer. It feels good for me and it feels good for the user. It just flows.

But I want to provide an experience that exceeds the typical CRUD displays. And this may mean working with the real data of the promises.

I want to know when the actual data arrives, just as Angular does behind the scenes. To do this, I take the Promise into my own hands. I get $promise from the resource object and call the “$promise.then” method, passing in my own callback function for the promise system to invoke if and when it retrieves the data successfully. In the call below, I make it clear that I want my data-mining task to be launched with the true data, and I move on to complete other event set-ups.

resource.$promise.then(function (trueData) {
  // No more promises.
  dataMine(trueData);
});
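
Where does resource come from? Typically it is the object an Angular $resource action hands back immediately, before the data exists. A rough sketch, with a made-up Widget endpoint:

// Rough sketch, assuming ngResource; the Widget class and endpoint are made up.
var Widget = $resource('/api/widgets/:id');

// get() returns right away with an empty object; the data is still in transit.
var resource = Widget.get({ id: 42 });

// resource.$promise is the empty box described above.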

And then I wait. I wait for the ‘then’.  And when the ‘then’ data arrives, I move on. In real time.  Spawning a task that is based on that data.

In my own practice,  there is always an end to this chain of Futures.

But I got to thinking, how nice would it be for an entire framework or language to use only promises, from end to end? My technique of  triggering  certain tasks only when the real data arrives wouldn’t make sense.  I would never need to wait; I would assume I have a promise. I would blithely move forward, deeper in the code, spawning each task based on a promise, carrying this on to the program’s nether end.

It would be Promises, all the way down.

So then, why can’t I do that now? With what I’ve got? Is this kiting the future?
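
A first, rough attempt with what I already have might be nothing more than chained ‘then’ calls, where each stage receives a promise and hands a new one on (dataMine and summarize are hypothetical stand-ins):

// Each then() returns a brand-new promise, so every stage gets a promise
// and passes one along. dataMine() and summarize() are hypothetical.
var minedPromise = resource.$promise
  .then(function (trueData) {
    return dataMine(trueData);       // may itself return a promise
  })
  .then(function (minedResult) {
    return summarize(minedResult);   // another promise, further down
  });

// Only the very last consumer ever opens the box.
minedPromise.then(function (summary) {
  console.log(summary);
});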

Stay tuned…