You may want to nest your app within a different folder to make your dev environment cleaner. You can do this by moving your App.js or App.tsx into a /app folder.
I'm going to move my app into a /app folder but you may use /src etc.
Say my structure is currently like this:
.expo/
App.tsx
ios/
node_modules/
.gitignore
app.json
package.json
...
First we will create our /app folder.
We can then move our App.tsx (App.js) and the rest of our app (/assets etc) to this folder.
Within that folder we will create an AppEntry.tsx file:
import registerRootComponent from 'expo/build/launch/registerRootComponent';
import App from './App';
registerRootComponent(App);
Lastly, we need to tell the app where the entrypoint is. We will need to update our package.json file by setting the main:
{
  "name": "my-app",
  "version": "1.0.0",
  "main": "app/AppEntry.tsx",
  "scripts": {
    "start": "expo start",
    "android": "expo run:android",
    "ios": "expo run:ios",
    "web": "expo start --web"
  },
  ...
}
You will end up with a structure like:
.expo/
app/App.tsx
app/AppEntry.tsx
app/assets/logo.png
ios/
node_modules/
.gitignore
app.json
package.json
...
That's it, start your app with npm run ios and away you go. Good luck!
Home Assistant is a powerful open-source platform for smart home automation, and with the increasing integration of technology into our daily lives, having access to your home automation system while on the go can be incredibly convenient. If you're an Apple CarPlay user, you can now extend the reach of Home Assistant to your car, allowing you to control various smart devices on the move. In this guide, we'll walk you through the process of setting up Home Assistant on Apple CarPlay.
Before you begin, ensure you have the following:
Setting up Home Assistant on Apple CarPlay brings your smart home control to the driver's seat, providing a seamless and convenient way to manage your devices while on the move. With the integration of CarPlay, Home Assistant continues to demonstrate its commitment to making home automation accessible wherever you are. Try out this guide, and enjoy the convenience of controlling your smart home right from your car. Safe driving!
]]>One of the primary advantages of Node.js is that it allows developers to use the same programming language, i.e., JavaScript, on both the server-side and client-side. This means that developers can use the same programming language for both front-end and back-end development, which reduces the learning curve and makes it easier to switch between different development tasks.
Node.js is known for its high performance and scalability. It is designed to handle a large number of simultaneous connections and requests, making it ideal for building real-time applications such as chat applications, online games, and collaborative tools. Additionally, Node.js uses an event-driven, non-blocking I/O model, which makes it highly efficient and able to handle large amounts of data without slowing down or crashing.
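To make that concrete, here is a minimal sketch (not from the original article) of the non-blocking model: a plain Node HTTP server where a slow operation is simulated with a timer, and the event loop keeps serving other requests while it is pending.
const http = require('http');

const server = http.createServer((req, res) => {
  // Simulate a slow database or file read without blocking the event loop;
  // other requests continue to be handled while this timer is pending.
  setTimeout(() => {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Done after 2 seconds\n');
  }, 2000);
});

server.listen(3000, () => console.log('Listening on http://localhost:3000'));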
Node.js has a large and active community of developers, which means that there is a wealth of resources available for developers to learn from and leverage. The Node.js community has created a vast ecosystem of modules, packages, and tools that make it easier to build and maintain Node.js applications.
As mentioned earlier, Node.js uses JavaScript, which is one of the most popular programming languages in the world. This means that developers who are already familiar with JavaScript can easily learn Node.js and start building applications. Additionally, Node.js has a simple and intuitive API, making it easy for developers to get started quickly.
Node.js is a cross-platform runtime environment, which means that developers can use it on Windows, macOS, Linux, and other operating systems. This makes it easier to build and deploy applications across different platforms without having to make significant changes to the code.
Node.js is designed to handle large-scale applications and can easily scale to meet the needs of growing businesses. It uses a modular approach to building applications, which means that developers can add new functionality and features as needed without having to rewrite the entire application.
In conclusion, Node.js is an excellent choice for building high-performance, scalable, and real-time applications. Its ease of use, cross-platform compatibility, and vast ecosystem of modules and tools make it an ideal platform for developers to build applications quickly and efficiently. So, if you are looking for a powerful and flexible platform for your next project, consider using Node.js.
]]>Before you begin, you will need the following:
To install Home Assistant, follow these steps:
pip install homeassistant
Once Home Assistant is installed, you need to set it up. Here are the steps:
Start Home Assistant by running the hass command.
Open your browser and go to http://localhost:8123.
Once you have set up Home Assistant, you can start adding devices and integrations. Here's how:
Click Configuration in the Home Assistant sidebar.
Click Integrations and then click the + button.
Congratulations! You have successfully installed and set up Home Assistant on your computer.
Now you can start adding devices and integrations to control your smart home from a single location. If you encounter any issues during the installation process, refer to the Home Assistant documentation or seek help from the Home Assistant community.
]]>Home Assistant is an open-source home automation platform that allows you to control all of your smart home devices from a central location. With Home Assistant, you can integrate all of your smart devices into a single platform, create customized automations, and monitor your home's status and activity.
Home Assistant supports a wide range of smart home devices, including lights, thermostats, sensors, cameras, and more. It also supports popular smart home protocols like Zigbee, Z-Wave, Wi-Fi, and Bluetooth.
Getting started with Home Assistant is easy, but it does require some technical know-how. To get started, you will need to download and install the Home Assistant software on a device like a Raspberry Pi or a dedicated server.
Once you have installed Home Assistant, you can start adding your smart devices to the platform. Home Assistant supports a wide range of devices and protocols, but you will need to ensure that your devices are compatible with the platform before you start adding them.
One of the great benefits of Home Assistant is the ability to customize your smart home automation. With Home Assistant, you can create complex automations that trigger based on a variety of conditions and events.
For example, you can create an automation that turns on your living room lights when you enter the room and turns them off when you leave. You can also create automations that adjust your thermostat based on the time of day or the weather outside.
In addition to creating automations, Home Assistant also supports custom scripts that can be triggered by voice commands or other events. This allows you to create custom actions that are not supported by your smart home devices out of the box.
Home Assistant also provides a dashboard that allows you to monitor your smart home devices and activity. The dashboard provides real-time information about your home's status, including the temperature, humidity, and energy usage.
You can also set up alerts and notifications that notify you when certain conditions are met. For example, you can set up an alert that notifies you when the front door is opened or when a motion sensor is triggered.
Home Assistant is a powerful home automation platform that provides a comprehensive solution for managing your smart home devices. With Home Assistant, you can integrate all of your devices into a single platform, create custom automations, and monitor your home's status and activity.
If you're looking to take your smart home to the next level, Home Assistant is definitely worth checking out. While it does require some technical know-how to get started, the customization and control it provides over your smart home devices is unparalleled.
]]>helpkb is a superfast and easy to use knowledge base / FAQ to help your customers get the info they need, when they need it most.
It's been proven that empowering your customers and staff to self serve and access information quickly and easily will boost customer satisfaction, reduce queries and make everyone's life easier. We've created helpkb to do just that. A FREE, super fast and easy to use knowledge base or FAQ so information is always on hand.
So check out the documentation / demo, and follow our guide to get started building your knowledge base / FAQ today!
Once you've entered your income and your income cycle you will get a beautiful report showing all the calculated values:
Sometimes you just want to deploy your Next.js website on the server and not build locally as stated above. To do this we are going to setup a simple shell script and use PM2 to deploy with no downtime.
An example PM2 ecosystem.config.js file in the root of your project:
module.exports = {
  apps: [
    {
      name: 'my-app',
      script: 'npm run start',
      cwd: '/Users/mrvautin/Documents/Code/my-app/',
      env: {
        NODE_ENV: 'development'
      },
      env_production: {
        NODE_ENV: 'production'
      }
    }
  ],
  deploy: {
    production: {
      user: 'my-user',
      host: 'my-server',
      key: '/Users/mrvautin/.ssh/id_rsa',
      ssh_options: 'ForwardAgent=yes',
      ref: 'origin/main',
      repo: 'git@github.com:mrvautin/my-app.git',
      path: '/var/www/html/my-app',
      'post-deploy': 'sh nextjs-pm2-deploy.sh'
    }
  }
};
An example package.json with our deploy script:
{
  "name": "my-app",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "deploy": "pm2 deploy production"
  },
  "dependencies": {},
  "devDependencies": {}
}
Now the contents of the nextjs-pm2-deploy.sh shell script referenced in the post-deploy section of the ecosystem.config.js file above:
echo "Deploy starting..."
npm run install || exit
BUILD_DIR=temp npm run build || exit
if [ ! -d "temp" ]; then
echo '\033[31m temp Directory not exists!\033[0m'
exit 1;
fi
rm -rf .next
mv temp .next
pm2 reload all --update-env
pm2 reset all
echo "Deploy done."
Basically, this script will install our app, set the build path to /temp, build the app into /temp, check the /temp directory exists, then move the contents over and reset our PM2 instance.
All this happens in an instant and you should see your app deployed with no noticeable downtime.
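One thing to note: the BUILD_DIR=temp variable only changes the build output if Next.js is told to use it. A minimal next.config.js sketch (an assumption on my part, not shown in the original post) that maps it onto distDir:
// next.config.js - build into the directory named by BUILD_DIR, falling back to .next
module.exports = {
  distDir: process.env.BUILD_DIR || '.next'
};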
]]>Raspberry Pi Imager
has made things simple with a new config panel to setup before writing the image.
Raspberry Pi Imager
hereRaspberry Pi Imager
on your OSRaspberry Pi Imager
CTRL
+SHIFT
+X
CMD
+SHIFT
+X
or CTRL
+SHIFT
+X
squido is a dead simple static website builder which can be hosted anywhere for super fast static HTML websites and very little effort.
The advantage of squido is that it has all the basics to build and deploy a static website built into the core. This means you don't have to waste time learning the ins and outs, writing code and playing around with deployment. You simply do the writing and customization of style / layout and hit deploy.
Static websites have many benefits (seen here) but sometimes it's best to simply try for yourself.
So check out the documentation, clone one of the demo repos and get started building your website today!
]]>A static website is comprised entirely of HTML, CSS and Javascript code. In the past static websites were coded by hand but now there are a few builder tools which can compile and build a static website for you.
Speed: Static website generally render much faster than a dynamic website due to not having complex rendering, database queries etc.
Cheap: Static websites can be developed and designed by almost anyone meaning there is reduced costs employing a developer to setup and maintain your website.
Simplicity: Code is easy to read, easy to write and and easier to maintain. Templates/themes are normally provided and can easily be altered to suite your website needs.
Hosting: There are more hosting options available for a static website, many of which are even free - eg: Github pages or Netlify which grab your code, build it and host it right from your Git repository. Server hosting needs less resources too due to only serving static content and not needing a Database and server processing.
Simplicity: Simplicity comes at a cost. Static websites lose the ability to do complex processing, database queries etc.
Limitations: There are certain things you simply cannot do making static websites suited to certain website types.
Whilst static websites are not suited for all situations, there are some really good instances where a static website is a good alternative to a complex dynamic one. Such as:
The easiest way to get started is to grab yourself a builder like squido. There are some boilerplate / template examples for a blog or a documentation website to get you started.
You can simply clone these repositories, edit the template files, add your colors to the CSS and add your content. You can then follow the steps to deploy to a hosting provider.
]]>envz
is that this process is made super simple and easy to understand leading to less mistakes.
# with npm
npm install envz
# or with Yarn
yarn add envz
Repo: https://github.com/mrvautin/envz
You should use envz as early on in the entry point of your app as possible. Eg: the app.js or index.js file which loads your app.
Rather than override the process.env.x object, envz will return a new object to use throughout your app.
const { envz } = require('envz');
Create an env.yaml or any other named file and load it:
const env = envz('env.yaml');
The idea is that process.env will be merged with the loaded yaml file.
envz uses a cascading (sequential order) configuration method which is easier to understand by looking at an example.
base:
  PORT: 1234
  config:
    default: test
development:
  PORT: 3000
  DATABASE: dev
  config:
    token: 12345
    secret: fwdsdgl
production:
  PORT: 80
  DATABASE: prod
  config:
    token: 67890
    key: puwndklf
    truthy: true
    allowed:
      - card
      - phone
The idea here is that the values in base are loaded, anything in development overrides that and finally production overrides that, depending on the NODE_ENV set.
For example, when a NODE_ENV of development is set the following env object is returned:
PORT: 3000,
config: {
  default: 'test',
  token: 12345,
  secret: 'fwdsdgl'
},
DATABASE: 'dev'
...
Eg: Where the PORT of 3000 from development overrides the base setting of 1234. If the NODE_ENV is set to production, then the PORT will be set to 80.
The idea behind base (or whatever you want to call it) is that you don't need to redefine defaults over and over for each environment.
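Reading the merged values is then just normal property access. A small sketch, assuming the env.yaml above and NODE_ENV=development:
const { envz } = require('envz');
const env = envz('env.yaml');

console.log(env.PORT);           // 3000 - development overrides base
console.log(env.config.default); // 'test' - inherited from base
console.log(env.config.token);   // 12345 - from development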
You can set the environment manually rather than using NODE_ENV by adding an environment object. Eg:
const env = envz('env.yaml', { environment: 'production' });
By default the values set in process.env override what is set in your yaml file. You can change this so that the yaml file is king by adding the following flag:
const env = envz('env.yaml', { yamlFileOverride: true });
Sometimes you may want to store changes back to your envz config. You can easily do this by importing save:
const { save } = require('envz');
The save method takes an object with two values:
envfile: The yaml file you are wanting to update
data: The object you want to update back to the file. See tests and example below.
// In this case we will be adding to the `base` config but you can easily
// replace `base` with `production` or whatever environment.
const saveObj = await save({
  envfile: 'test.yaml',
  data: {
    base: {
      config: {
        default: 'default-key'
      }
    }
  }
});
This will result in the test.yaml being updated:
base:
  PORT: 1234
  config:
    default: default-key
...
]]>Setting up snippets is easy as:
Mac
Code > Preferences > User Snippets > Select a file or create a new one
Windows
File > Preferences > User Snippets > Select a file or create a new one
Once setup, snippets are triggered by pressing:
CTRL+Space
Sometimes it's easier to look at an example of the Snippets syntax.
A simple console.log can be sped up using the following syntax. Once triggered, the snippet will create a console.log line and drop your cursor into the middle with single quotes wrapping it.
{
  "Console log": {
    "scope": "javascript,typescript",
    "prefix": "log",
    "body": [
      "console.log('$1');"
    ],
    "description": "Log output to console"
  }
}
Quick and easy logging of the variable in your clipboard.
{
  "Console log variable": {
    "scope": "javascript,typescript",
    "prefix": "log var",
    "body": [
      "console.log('${CLIPBOARD}', ${CLIPBOARD});"
    ],
    "description": "Console log variable"
  }
}
Quick for loop
{
  "For Loop": {
    "prefix": ["for", "for-const"],
    "body": ["for (const ${2:element} of ${1:array}) {", "\t$0", "}"],
    "description": "A for loop."
  }
}
Wrapping code blocks in the markdown code block syntax
{
  "Syntax highlighting": {
    "scope": "markdown",
    "prefix": "highlight",
    "body": [
      "``` javascript",
      "${TM_SELECTED_TEXT}",
      "```"
    ],
    "description": "Markdown highlight syntax"
  }
}
]]>For more information on variables available see the official snippet docs.
This guide assumes you know your way around Node.Js and have it installed along with NPM.
Firstly you are going to want to setup your board in the Arduino IDE. We will be flashing some simple Wifi firmware to get it on your Wireless network then we can talk to it using Node.Js.
Select the board in the Arduino IDE:
Tools > Board > ESP8266 Boards > WeMos D1 R2
Plug in your board using a USB cable
Open the Wifi firmware:
File > Examples > Firmata > StandardFirmataWifi
You are going to need to set up your Wifi SSID and Passphrase in the WifiConfig.h file. You shouldn't need to touch the StandardFirmataWifi.h file at all.
Scroll to the section which has the Wifi SSID configuration and enter the name of your Wifi network (SSID):
// replace this with your wireless network SSID
char ssid[] = "your_network_name";
Scroll to the section which has the Security configuration and enter your passphrase or Wifi password:
#ifdef WIFI_WPA_SECURITY
char wpa_passphrase[] = "your_wpa_passphrase";
#endif //WIFI_WPA_SECURITY
That's it. You can now compile and upload the code to your board using the Upload button.
Once that is complete, your board will reset and hopefully connect to your Wifi network.
You can now login to your router to check the Wireless clients and determine the IP address of your board. At this point you might like to reserve an IP address using the MAC address for your board so it doesn't change on restart and kill your Node.Js code.
Now we are going to setup our Node.Js code to do some simple requests/commands.
Install our dependencies
npm i etherport-client johnny-five --save
Your package.json should look something like this:
{
  "name": "nodejs-test",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "author": "",
  "license": "ISC",
  "dependencies": {
    "etherport-client": "^0.1.4",
    "johnny-five": "^2.0.0"
  }
}
Now to our Node.Js code. We are going to make the little blue light flash which sits next to the silver WeMos chip on our board:
You will need to change the IP address to the one you found in step 9.
const { EtherPortClient } = require('etherport-client');
const { Board, Led } = require('johnny-five');

const board = new Board({
  port: new EtherPortClient({
    host: '192.168.0.201',
    port: 3030
  }),
  repl: false
});

const LED_PIN = 2;

board.on('ready', () => {
  console.log('Board ready');
  var led = new Led(LED_PIN);
  led.blink();
});
Now run your code and check the output in the console and the light action on your board.
You should see some output like this:
1610519728478 SerialPort Connecting to host:port: 192.168.0.201:3030
1610519728496 Connected Connecting to host:port: 192.168.0.201:3030
Board ready
And some light action here:
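If you want the blinking to stop after a while, a small sketch of an alternative ready handler (using johnny-five's Led and Board helpers, my own untested addition) could be:
board.on('ready', () => {
  console.log('Board ready');
  const led = new Led(LED_PIN);
  led.blink(500); // toggle every 500ms

  // After 5 seconds, stop blinking and switch the LED off
  board.wait(5000, () => {
    led.stop();
    led.off();
  });
});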
La Marzocco GS3 (SVG)
Logos
]]>Message: You are recommended to have at least 150 MB of memory available for smooth operation. It looks like you have ~87 MB available.
Adding the --no-mem-check flag quickly bypasses this error and gets you on your way.
Command: ghost update --no-mem-check
We have created a very simple and beautiful loan calculator so you can quickly and easily see these figures before taking the big step and applying.
Enjoy! Loan calculator
]]>Setup your first DNS in your cluster
Fill in Database as "admin", Username/Password as per the user setup in MongoDB Atlas.
Skip SSH tab
Click "Use SSL protocol" then select "Self-signed Certificate" from the dropdown.
Sure Wordpress has all the bells and whistles with plugins for just about everything but they generally take your website from a blog to a shopping cart or CMS etc. You need to ask yourself, do I really need this stuff? If the answer is no and you just need a blog then Ghost is the way to go.
Ghost allows you to quickly and easily setup a beautiful and powerful blog within minutes. There is a cloud hosted option (cost) or a free host your own option.
Best of all, Ghost is powerful but not vulnerable, and doesn't require updates every 10 minutes for the 50 Wordpress plugins you have installed.
So next time you are looking for some blogging software, give Ghost a go!
]]>Firstly you will want to create your db.js
file which will export some handy database related functions.
File: db.js
const mongoClient = require('mongodb').MongoClient;
const mongoDbUrl = 'mongodb://127.0.0.1:27017';

let mongodb;

function connect(callback){
  mongoClient.connect(mongoDbUrl, (err, db) => {
    mongodb = db;
    callback();
  });
}

function get(){
  return mongodb;
}

function close(){
  mongodb.close();
}

module.exports = {
  connect,
  get,
  close
};
After creating this file you can simply require it and you now have a few functions at your disposal: connect, get and close.
File: app.js
You will then want to call connect() before your application starts and the server starts listening. Eg:
db.connect(() => {
  app.listen(process.env.PORT || 5555, function (){
    console.log(`Listening`);
  });
});
Now you have access to your database connection anywhere in your application by simply requiring the db.js file and using the get() function.
File: users.js (routes file for example)
const db = require('./db');

router.get('/users', (req, res) => {
  db.get().collection('users').find({}).toArray()
  .then((users) => {
    console.log('Users', users);
  });
});
It just makes everything much cleaner and easier to handle this way. I hope this helped you in some way.
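The exported close() function is also worth wiring up so the connection is released when the app stops; a minimal sketch (my addition, assuming the db.js above) in app.js could be:
// Close the MongoDB connection when the process is asked to stop
process.on('SIGINT', () => {
  db.close();
  process.exit(0);
});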
]]>You simply need to run:
cat ~/.ssh/id_rsa.pub | ssh root@droplet_ip_address "sudo sshcommand acl-add dokku laptop"
\r, \r\n and <br> are ignored.
The solution:
<br> <br> <br>
No worries, glad I could help!
]]>CNAME
. When the request comes into Heroku the platform will return the "No such application" error. The Heroku support team suggests adding a custom domain to the Heroku dashboard for each SaaS user. I could see this getting out of hand so I decided to implement a proxy server to fix the issue.
Firstly you will want to add your own wildcard custom domain to your Heroku application and also create a CNAME with your DNS provider.
Adding your CNAME with your DNS provider to point to Heroku:
Hostname: *.mydomain.com
Path: my_heroku_app_name.herokuapp.com
Adding your domain to the Heroku dashboard:
Domain Name: *.mydomain.com
DNS Target: my_heroku_app_name.herokuapp.com
You will then want to setup your proxy server. I spun up a new Digital Ocean droplet and setup Nginx to proxy the requests.
The Nginx config would look like this:
server {
  listen 80 default_server;
  server_name proxy.mydomain.com;

  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host-customdomain.mydomain.com;
    proxy_redirect off;
    proxy_pass http://my_heroku_app_name.herokuapp.com;
  }
}
Basically what is happening is your Nginx server is adding the Host header and proxying the request onto Heroku. The Heroku router will then read the Host header and determine which application to select. It's then that your application will need to determine the domain in the request and serve the correct SaaS user.
You will then need to set up an A DNS record with your DNS provider to point to the new proxy server:
Hostname: proxy.mydomain.com
Path: 192.168.0.1 (This IP is your Digital Ocean droplet IP)
You will then want the users of your SaaS application who want a custom domain to point their DNS to proxy.mydomain.com.
Your SaaS application will then need to get the Host header, remove -customdomain.mydomain.com and determine who the customer of your SaaS application is.
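How you map that Host header back to a customer is up to your app. A hedged Express middleware sketch (lookupCustomerByDomain is a hypothetical function, not part of the original post):
app.use((req, res, next) => {
  // Heroku passes through the Host header set by the Nginx proxy,
  // e.g. "help.customer.com-customdomain.mydomain.com"
  const host = req.headers.host || '';
  const customDomain = host.replace('-customdomain.mydomain.com', '');

  // Hypothetical lookup that maps the original domain back to a SaaS customer
  req.customer = lookupCustomerByDomain(customDomain);
  next();
});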
Having built many Node.Js projects, this would be our first venture into building a scalable SaaS app. After our initial investigation, we couldn't find much advice on where to start or things to look out for. We found vague articles on various projects built many years ago but nothing using modern tech, in particular Node.JS.
We are intending this article to be helpful to anyone wanting to build a SaaS app using Node.Js.
Up until this project we built our apps on self-managed Digital Ocean VMs. This was fine in isolation but we found it difficult to find any information on scaling and load balancing to grow with the app's user base.
We decided to go with a dedicated Node.Js host (Heroku) which used the Dyno type approach. This seemed like the best approach to easily scale as the customer base and load grows.
Generally our database of choice to pair with Node.JS is MongoDB. After trying and considering various other databases, we decided to stick with MongoDB.
Hosting MongoDB yourself is easy enough but we wanted something more reliable with load balancing/redundancy, scaling and backups. There are various options from MongoDB Atlas, Compose.io, mLab etc. After some consideration, we went with mLab for ease of use, scalability and best price.
This is where we spent most of our time trying to figure out the best approach. There are two parts to the app: the front and backend. The frontend is the part of the app which would see all the public traffic. Each customer of our app would have their own FAQ with a subdomain (and optional custom domain) which would see significant traffic. The backend is the management side for our customers where they would manage settings, content, style and more. The backend would receive minimal traffic in comparison to the public facing frontend and so would have much less of a need to scale.
Instead of creating one big app we decided to split them out and run them on separate Heroku plans. This way we can scale the frontend easily whilst leaving the backend as is. It also means we can easily do maintenance, add features etc without affecting the public facing side of the app.
We learnt a lot. First of all, we learnt that making a standalone app into a SaaS is not as easy as it sounds. There are many different aspects which need to be considered and worked through. We found that scalability and being flexible were the keys to our success and this is where we spent most of our time. We also found that doing everything and managing everything yourself is not always the best thing. Leave the server and DB hosting to a dedicated company to manage for you. As a startup, you can't possibly be professional and perfect at everything. You can always bring services back in house as you grow and your available skill set grows too.
We would love to hear feedback from others who have faced similar hurdles getting their SaaS app off the ground and how they dealt with them.
]]>ezyFAQ allows for customising your FAQ/knowledge base with branding, CSS and HTML. ezyFAQ also allows you to bring your own domain for a seamless integration with your existing website - e.g: help.mydomain.com. The live search, analytics, responsive design (also beautiful on Tablets and Phones), pre-built themes and templates allow you to customise a little or a lot!
ezyFAQ runs its own FAQ using the ezyFAQ platform which you can view here: http://support.ezyfaq.com
More information can be found at www.ezyfaq.com
]]>You can see the basic structure is really easy to understand. We are exposing the multiply()
function as a public function by returning in the module.exports
. The other function aptly named nonPublic()
is called by the multiply()
function but cannot be called publicly. More on this below.
You can see our multiply()
function takes two values, multiplies them and returns a label from our nonPublic()
function, followed by our multiplied value. Easy!
File: multiply.js
// require any modules
module.exports = {
  multiply: function (val1, val2, callback){
    var returnedValue = val1 * val2;
    callback(null, nonPublic() + returnedValue);
  }
};

function nonPublic(){
  return 'Result: ';
}
File: test.js
Using our new module locally for testing is easy:
var mod = require('./multiply');

// This call fails as nonPublic() isn't exported
try { mod.nonPublic(); } catch (err) { console.log(err.message); }

mod.multiply(5, 10, function(err, result){
  console.log(result);
});
The first line requires our local module. Note: the ./ value for modules located in the same directory.
After we have required it we can go ahead and use it. First we call the nonPublic() function to show it doesn't work publicly (this outputs an error), then call the multiply() function.
We pass in 5 and 10 to be multiplied together and we write the result to the console.
To run our test.js script we simply run the following in our console and observe the output:
node test.js
This is a really basic module which outlines the basic steps to get started on writing your first NPM module.
One of my slightly (hardly) more advanced (has options etc) modules, metaget, can be found here as further reading: https://github.com/mrvautin/metaget
The relatively easy way to overcome this is to use an event emitter in your Express app and wait for that to complete before starting your tests. This doesn't appear to be documented anywhere obvious.
You will need to set up the event emitter in your Express app as the final step before assuming the app has started and is ready. In my case, I had made the DB connection etc and the call to app.listen was my final event.
Here is an example:
app.listen(app_port, app_host, function () {
  console.log('App has started');
  app.emit("appStarted");
});
The specific line is:
app.emit("appStarted");
This creates an event which we can wait on called appStarted (this can be changed to whatever you want).
Next we need to wait for this event in our Mocha/Supertest tests (test.js).
First we will require our Express app. Note: app is my main Express file, some people use server.js and this value would then become require('../server'):
app = require('../app');
We then need to create a Supertest agent using our Express instance:
var request = require("supertest");
var agent = request.agent(app);
Then we wait for our Express event using before():
before(function (done) {
  app.on("appStarted", function(){
    done();
  });
});
Then we can kick off all our tests. A full test example:
var request = require("supertest");
var assert = require('chai').assert;
app = require('../app');
var agent = request.agent(app);
before(function (done) {
app.on("appStarted", function(){
done();
});
});
describe("Add config",function(){
it("Add a new connection",function(done){
agent
.post("/add_config")
.expect(200)
.expect("Config successfully added", done);
});
});
]]>authorStats
fetches your daily/weekly/monthly download stats for all your authored NPM packages and outputs a nice table right in your command line.
It's best to install the package globally:
npm install author-stats -g
authorStats <npm username>
Where <npm username>
is the username on the NPM website. My profile is: https://www.npmjs.com/~mrvautin
and username is mrvautin
.
A nice command line table with the daily, weekly and monthly download numbers of all your packages will be output to your terminal.
Note: If you have a lot of packages you will need to be patient while authorStats fetches the data.
The application is designed to be easy to use and install, and is based on search for simplicity rather than nested categories. Simply search for what you want and select from the results. expressCart uses the powerful lunr.js to index the products and enable the best search results.
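This isn't expressCart's actual code, just a tiny sketch of how lunr.js indexing and searching works in general (the productList array is made up for the example):
const lunr = require('lunr');

// Made-up sample data standing in for real products
const productList = [
  { id: '1', title: 'Coffee grinder', description: 'Burr grinder for espresso' },
  { id: '2', title: 'Espresso machine', description: 'Dual boiler machine' }
];

// Build the index once, weighting the title field higher than the description
const index = lunr(function () {
  this.ref('id');
  this.field('title', { boost: 10 });
  this.field('description');
  productList.forEach((product) => this.add(product));
});

// Returns scored matches, eg searching for "espresso"
console.log(index.search('espresso'));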
Website: https://expresscart.markmoffat.com/
Demo: https://expresscart-demo.markmoffat.com
Homepage:
Admin manage settings:
Popout cart:
Dashboard:
Using PM2 is the easiest and best option for running production websites. See the PM2 documentation for more information or a short guide here: https://mrvautin.com/running-nodejs-applications-in-production-forever-vs-supervisord-vs-pm2/.
]]>This pushed me to design ghostStrap
which can easily be used as a starting point for anyone wanting to create a theme using the Bootstrap standard.
Some commands may need sudo
cd content/themes/
git clone https://github.com/mrvautin/ghostStrap.git
Go to the General settings and select ghostStrap from the Theme dropdown.
Please leave a comment if you use the theme or have any feedback.
Homepage
Single post
Menu
Mobile layout
Menu
First of all, you need to turn on the Ghost Public API (which by default is turned off). You will want to jump into your Ghost admin at www.myblog.com/ghost, select Labs from the menu, scroll to the bottom and check the Public API box.
Now this is turned on, our code will be able to interact with the API to index and search posts.
We are going to use the following Github repository by Windyo here: https://github.com/Windyo/ghostHunter/. This is a fork of the popular ghostHunter module but has been updated to use the Ghost API, rather than using and hacking RSS feeds. ghostHunter uses the extremely powerful Lunr library to index your posts and provide the best, weighted keyword search results.
You will need to download the file jquery.ghostHunter.min.js from the Github repository and add it to your theme: /content/themes/mytheme/assets/js/.
You will then need to add a reference to that file in: /content/themes/mytheme/default.hbs
<script type="text/javascript" src="{{asset "js/jquery.ghostHunter.min.js"}}"></script>
Note: Add it at the bottom of the file after the jQuery reference
Once you have done that you can start adding the search box to your page(s).
You will need some javascript which calls the ghostHunter module to display the results of the search. You will need to add the following code to your /content/themes/mytheme/assets/js/index.js file.
There are various options on the ghostHunter module. I've decided to display results as they are typed, so I have set onKeyUp to true and have chosen to hide the number of results by setting displaySearchInfo to false. Check the Github repository for more options.
$(".search-results").addClass("results-hide");
$("#search-field").ghostHunter({
results: "#search-results",
onKeyUp: true,
displaySearchInfo: false,
result_template : "<a href='{{link}}'><li class='list-group-item'>{{title}}</li></a>",
before: function(){
$(".search-results").removeClass("results-hide");
}
});
Note: My theme is using Twitter Bootstrap so you will see references to list-group-item etc which you can remove and add your own CSS styling.
The next thing you need to do is add some simple CSS to your /content/themes/mytheme/assets/css/screen.css to format the search and results box.
.search-box {
  margin-bottom: 10px;
}
.search-results {
  position: absolute;
  z-index: 1000;
}
.search-button {
  background-color: #1B95E0;
  color: white;
}
.results-hide {
  display: none;
}
Note: You can edit styling as you wish.
Lastly you will need to add the search box to your template file: /content/themes/mytheme/index.hbs. You can also add this to your post.hbs view too if you wish.
<div class="row">
<div class="search-box col-xs-12 col-sm-12 col-md-4 col-md-offset-4 col-lg-4 col-lg-offset-4">
<div class="input-group">
<input type="text" id="search-field" class="form-control input-lg" placeholder="Search for...">
<span class="input-group-btn">
<button class="btn btn-default search-button btn-lg" type="button">Search!</button>
</span>
</div>
</div>
</div>
<section class="search-results col-xs-12 col-sm-12 col-md-8 col-md-offset-2 col-lg-8 col-lg-offset-2" >
<ul id="search-results" class="search-results col-md-12" class="list-group"></ul>
</section>
Please let me know in the comments what you think.
]]>HTTPS
(SSL), you will need to ensure your Web Server is sending the correct Headers to Ghost. Failing to do so can cause your Blog to go into a endless redirect loop and fail to work.
The production section of your Ghost config.js
will look something like this:
production: {
  url: 'https://mrvautin.com',
  mail: {},
  database: {}
}
Depending on your web server the setting is slightly different. We are going to cover off Apache and Nginx as they are the most popular.
A simple Nginx config would look like:
server {
  listen 443 ssl;
  server_name mrvautin.com www.mrvautin.com;

  # SSL STUFF

  location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $http_host;
    proxy_pass http://127.0.0.1:2368;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}
The important line above is:
proxy_set_header X-Forwarded-Proto $scheme;
This line ensures the Header which Ghost reads has the correct protocol set.
A simple Apache virtual host config would look like:
<VirtualHost *:443>
RequestHeader set X-Forwarded-Proto "https"
ProxyPreserveHost On
ServerName mrvautin.com
SSLEngine On
SSLCertificateFile /etc/apache2/ssl/server.crt
SSLCertificateKeyFile /etc/apache2/ssl/server.key
<Location/>
SSLRequireSSL
</Location>
ProxyPass / http://127.0.0.1:2368
ProxyPassReverse / http://127.0.0.1:2368
</VirtualHost>
The important line above is:
RequestHeader set X-Forwarded-Proto "https"
This line ensures the Header which Ghost reads has the correct protocol set.
]]>/content/themes/mytheme/partials/loop.hbs
file of your theme.
In the loop.hbs
file you will see:
{{excerpt words="26"}}
You will need to change the word excerpt to content. The new code will be:
{{content words="26"}}
If you are wanting to change the length (amount of words) of the excerpt please see here.
]]>/content/themes/mytheme/partials/loop.hbs
When opening your loop.hbs
file you will see code like the this:
You can change the default Casper value of 26 to any value you want:
You can play around and change this value and see what length best suits your writing style and theme.
]]>npm install metaget --save
var metaget = require("metaget");
metaget.fetch('https://wordpress.com', function (err, meta_response) {
if(err){
console.log(err);
}else{
console.log(meta_response);
}
});
Response will be a Javascript Object containing all the meta tags from the URL. All tags are output in the example above. Some tags with illegal characters can be accessed by:
meta_response["og:title"];
It's possible to set any HTTP headers in the request. This can be done by specifying them as options in the call. If no options are provided the only default header is a User-Agent of "request".
This is how you would specify a "User-Agent" of a Google Bot:
var metaget = require("metaget");
metaget.fetch('https://wordpress.com',{headers:{"User-Agent": "Googlebot"}}, function (err, meta_response) {
if(err){
console.log(err);
}else{
console.log(meta_response);
}
});
git checkout -b my-new-feature
git commit -am 'Add some feature'
git push origin my-new-feature
adminMongo connection information (including username/password) is stored unencrypted in a config file, it is not recommended to run this application on a production or public facing server without proper security considerations.
git clone https://github.com/mrvautin/adminMongo.git && cd adminMongo
npm install
npm start
adminMongo will listen on host: localhost and port: 1234 by default.
This can be overwritten by adding a config file in /config/app.json. The config file can also override the default 5 docs per page.
The config file options are:
{
  "app": {
    "host": "10.0.0.1",
    "port": 4321,
    "docs_per_page": 15
  }
}
After visiting http://127.0.0.1:1234 you will be presented with a connection screen. You need to give your connection a unique name as a reference when using adminMongo and a MongoDB formatted connection string. The format of a MongoDB connection string is: mongodb://<user>:<password>@127.0.0.1:<port>/<db> where specifying down to the <db> level is optional. For more information on MongoDB connection strings, see the official MongoDB documentation.
Note: The connection can be either local or remote hosted on VPS or MongoDB service such as MongoLab.
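If you want to sanity check a connection string before adding it, a small sketch using the official Node driver (my own example, any reasonably recent mongodb package) could be:
const { MongoClient } = require('mongodb');

// Replace with the connection string you plan to add to adminMongo
const uri = 'mongodb://user:password@127.0.0.1:27017/mydb';

MongoClient.connect(uri)
  .then((client) => {
    console.log('Connection string works');
    return client.close();
  })
  .catch((err) => console.error('Connection failed:', err.message));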
After opening your newly created connection, you are able to see all database objects associated with your connection. Here you can create/delete collections, create/delete users and see various stats for your database.
After selecting your collection from the "Database Objects" menu, you will be presented with the collections screen. Here you can see documents in pagination form, create new documents, search documents, delete, edit documents and view/add indexes to your collection.
You can search documents using the Search documents button on the collections screen. You will need to enter the key (field name) and value. Eg: key = "_id" and value = "569ff81e0077663d78a114ce".
You can clear your search by clicking the Reset button on the collections screen.
Adding and editing documents is done using a JSON syntax highlighting control.
Indexes can be added from the collection screen. Please see the official MongoDB documentation on adding indexes.
git checkout -b my-new-feature
git commit -am 'Add some feature'
git push origin my-new-feature
These are used in your mytheme.theme/Bundles folder.
App Name | Bundle ID |
---|---|
1Password | com.agilebits.onepassword-ios |
500px | com.500px |
9gag | com.9gag.ios.mobile |
Activator | libactivator |
Airbnb | com.airbnb.app |
Amazon | com.amazon.Amazon |
App Store | com.apple.AppStore |
Ask Fm | fm.ask.askfm |
BiteSMS | com.bitesms |
Calculator | com.apple.calculator |
Calendar | com.apple.mobilecal |
Camera + | com.taptaptap.cloudphotos |
Camera | com.apple.camera |
Chase Mobile | com.chase |
Circle the Dot | com.ketchapp.circlethedot |
Clock | com.apple.mobiletimer |
CNN | com.cnn.iphone |
Compass | com.apple.compass |
Contacts | com.apple.MobileAddressBook |
Digg | com.digg.Digg |
Dropbox | com.getdropbox.Dropbox |
Ebay | com.ebay.iphone |
Edline | com.alecgorge.Brebeuf-Edline |
Engadget | com.aol.engadget |
ESPN SportsCenter | com.espn.ScoreCenter |
eTrade | com.etrade.mobileproiphone |
ETSY | com.etsy.etsyforios |
Evernote | com.evernote.Evernote |
Evernote | com.evernote.iPhone.Evernote |
Facebook Groups | com.facebook.Groups |
Facebook Page Admin | com.facebook.PageAdminApp |
Facebook Paper | com.facebook.Paper |
com.facebook.Facebook | |
Facetime | com.apple.facetime |
FB Messenger | com.facebook.Messenger |
Find my iPhone | com.apple.mobileme.fmip1 |
Firefox | org.mozilla.ios.Firefox |
Flappy Bird | com.dotgears.flap |
Fleksy | com.syntellia.Fleksy |
Foap | com.foap.foap |
Game Center | com.apple.gamecenter |
Gamestop | com.gamestop.powerup |
Google + | com.google.GooglePlus |
Google Chrome | com.google.chrome.ios |
Google Chromecast | com.google.Chromecast |
Google Docs | com.google.Docs |
Google Drive | com.google.Drive |
Google Gmail | com.google.Gmail |
Google Inbox | com.google.inbox |
Google Maps | com.google.Maps |
Google Photos | com.google.photos |
Google Search | com.google.GoogleMobile |
Google Translate | com.google.Translate |
Health | com.apple.Health |
iBooks | com.apple.iBooks |
iFile | eu.heinelt.ifile |
iMessage | com.apple.MobileSMS |
iMovie | com.apple.iMovie |
com.burbn.instagram | |
iTunes Connect | com.apple.itunesconnect.mobile |
iTunes Store | com.apple.MobileStore |
Kickstarter | com.kickstarter.kickstarter |
com.linkedin.LinkedIn | |
com.apple.mobilemail | |
Make it Rain | com.SpaceInch.LoveOfMoney |
Maps | com.apple.Maps |
Medium | com.medium.reader |
ModMyi | com.modmyi.ModMyi |
Music | com.apple.Music |
Myspace | com.myspace.iPhone |
Netflix | com.netflix.Netflix |
Notes | com.apple.mobilenotes |
Ookla Speedtest | com.ookla.speedtest |
Oovoo | com.oovoo.iphone.free |
Outlook | com.microsoft.Office.Outlook |
YouTube | com.youtube.ios.youtube |
Pandora | com.pandora.pandora |
Passbook | com.apple.Passbook |
Paypal | com.yourcompany.PPClient |
Phonto | com.youthhr.Phonto |
PhotoMath | com.microblink.PhotoMath |
Photos | com.apple.mobileslideshow |
Photoshop Express | com.adobe.PSMobile |
Piano Tiles | com.umonistudio.tapTile |
Podcasts | com.apple.podcasts |
Quizup | com.plainvanillacorp.quizup |
com.tyanya.reddit | |
Reeder | ch.reeder |
Release Slow Shutter | com.cogitap.SlowShutter |
Reminders | com.apple.reminders |
Remote Mouse | com.remotemouse.remoteMouse |
Remote | com.apple.Remote |
Safari | com.apple.mobilesafari |
Settings | com.apple.Preferences |
Skype | com.skype.skype |
Smartglass | com.microsoft.xboxavatars |
Snapchat | com.toyopagroup.picaboo |
SoundHound | com.melodis.midomi |
Spotify | com.spotify.client |
Square Cash | com.squareup.cash |
Stay in the Line | com.six8t.StayInTheLine |
Stocks | com.apple.stocks |
Stop Motion | com.cateater.funapps.stopmotion |
Swing Copters | com.dotgears.swing |
Teamviewer | com.teamviewer.teamviewer |
Terminal | com.googlecode.mobileterminal.Terminal |
Testflight | com.apple.TestFlight |
Things | com.culturedcode.ThingsTouch |
Tinder | com.cardify.tinder |
Tips | com.apple.tips |
Tumblr | com.tumblr.tumblr |
Tweetbot 3 | com.tapbots.Tweetbot3 |
Tweetbot 4 | com.tapbots.Tweetbot4 |
Tweetbot iPad | com.tapbots.TweetbotPad |
Tweetbot | com.tapbots.Tweetbot |
com.atebits.Tweetie2 | |
Ultimate Guitar Tabs | com.ultimateguitar.tabs100 |
Viber | com.viber |
Videos | com.apple.videos |
Vidgets | com.lesscode.widgetcenter |
Vimeo | com.vimeo |
Voice Memos | com.apple.VoiceMemos |
Weather | com.apple.weather |
net.whatsapp.WhatsApp | |
Winterboard | com.saurik.Winterboard |
WWDC | developer.apple.wwdc |
WWDC | developer.apple.wwdc-Release |
Yahoo Weather | com.yahoo.weather |
YouTube | com.google.ios.youtube |
Common icon names and sizes (px):
Icon name | Size |
---|---|
AppIcon76x76~ipad.png | 76x76 |
AppIcon29x29@2x.png | 58x58 |
AppIcon40x40@2x.png | 80x80 |
AppIcon60x60@2x.png | 120x120 |
AppIcon76x76@2x~ipad.png | 152x152 |
AppIcon29x29@3x.png | 87x87 |
AppIcon40x40@3x.png | 120x120 |
AppIcon60x60@3x.png | 180x180 |
[Cydia link](cydia://search/nba lockscreen scores)
]]>NOTE: The app may not update the number of notes until it's closed and reopened.
]]>Demo: http://openkb.mrvautin.com
git clone https://github.com/mrvautin/openKB.git && cd openKB
npm install
npm start
By adding your own stylesheet to /public/stylesheets/ and adding a link in /views/layouts/layout.hbs you can add your own styling and graphics.
The admin can be a little difficult for editing Markdown on smaller screens.
Visit: http://127.0.0.1:4444/login
A new user form will be shown where a user can be created.
There are a few configurations that can be made which are held in /routes/config.js. If any values have been changed the app will need to be restarted.
Using PM2 seems to be the easiest and best option for running production websites. See the PM2 documentation for more information or a short guide here: http://mrvautin.com/Running-Nodejs-applications-in-production-forever-vs-supervisord-vs-pm2.
]]>Process management
and Webservers
.
One of the aspects you need to think about is keeping your app alive. When running PHP, when your app process crashes or server restarts the application WILL come back online automatically. With Nodejs, if the process crashes or the server restarts the process will NOT start itself. This is where a process manager comes into play, luckily there are a few good ones to choose from.
I will run through some of them and detail their pros and cons but to summarise: I like PM2 for easy setup of personal projects but I definitely recommend setting up systemd for a proper production environment.
Pros
Cons
Pros
Cons
Pros
pm2 list gives an easy to read table of all apps
Cons
Pros
Cons
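Whichever process manager you pick, it can only restart a process that actually exits. A small hedged sketch (my own addition) of the kind of crash handling that pairs well with PM2, forever or systemd:
// Log the error, then exit so the process manager can restart a clean instance
process.on('uncaughtException', (err) => {
  console.error('Uncaught exception:', err);
  process.exit(1);
});

process.on('unhandledRejection', (reason) => {
  console.error('Unhandled rejection:', reason);
  process.exit(1);
});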
I'm not even going to suggest others, I'm a Nginx man through and through and this in my opinion is the best and only choice in running a Nodejs webserver.
Here is a short guide on setting up Nginx for your Nodejs app:
Firstly you need to install Nginx. Eg: Ubuntu: $ sudo apt-get install nginx
You then want to create the config for your application:
$ sudo nano /etc/nginx/sites-available/myapp
The following is a very basic config to run your application. Basically it will listen for requests to mydomain.com on HTTP and forward those requests to our app (running with a process manager above) on port 4444. You will need to change that port to whatever port your app is running/listening on.
server {
  listen 80;
  server_name mydomain.com;

  location / {
    proxy_pass http://localhost:4444;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
  }
}
You can then save this file and test your Nginx config with:
$ nginx -t
All going well, you should get something like this:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
If not, check the error and adjust your config to resolve.
After a successful test, you now need to reload the new config into Nginx. You can either restart Nginx (will cause short downtime to all apps on server) or reload the config only. Best choice is to reload.
Reload: $ service nginx reload
Restart: $ service nginx restart
That's it! You should be able to visit your app at: http://mydomain.com
]]>Some of the Meta tags you will want to set are called Open Graph Metadata. When someone shares a link to your website on Facebook, you are telling Facebook what title, description, image etc you want to show in a persons feed. This means when someone shares a status with a URL to your website, Facebook will look at the URL and pull all the Open Graph Metadata in order to show the title, description, and images etc.
All Metadata should be found within the <head>
tag of your HTML document. The basic code of a Metadata is:
<meta property="property_value" content="content_value"/>
Where property_value is the actual Metadata we want to set and content_value is the actual value you would like.
A list of Open Graph properties can be found here: http://ogp.me/
The main ones you want to concentrate on are:
The URL of the object being embedded into Facebook. This URL needs to be unique as it is used to collate Likes and shares on the object. The URL shouldn't include any session variables or GET parameters.
Example:
<meta property="og:url" content="http://mrvautin.com/Adding-Facebook-Open-Graph-Metadata-to-your-website"/>
The title, headline or name of the object/article. This is shown when the URL/object is embedded into Facebook.
Example:
<meta property="og:title" content="Adding Facebook Open Graph Metadata to your website"/>
A two sentence description/summary of the article/URL.
Example:
<meta property="og:description" content="A short two sentence description of the article."/>
Here you can include a link to an image you want to show when a URL to your website is shared. Facebook recommends an image at least 600x315 pixels but recommends using a larger image and letting them scale it accordingly. They recommend using an image with a 1.91:1 aspect ratio to avoid cropping. Note: images cannot exceed 5MB in size.
Example:
<meta property="og:image" content="http://mrvautin.com/path_to_image.png"/>
This is the type of URL being shared. Facebook outlines a long list of og:type options but for a general website/blog you will want to use article.
Example:
<meta property="og:type" content="article" />
<html>
<head>
<meta property="og:url" content="http://mrvautin.com/Adding-Facebook-Open-Graph-Metadata-to-your-website"/>
<meta property="og:title" content="Adding Facebook Open Graph Metadata to your website"/>
<meta property="og:description" content="Adding Facebook Open Graph Metadata to your website"/>
<meta property="og:image" content="http://mrvautin.com/path_to_image.png"/>
<meta property="og:type" content="article" />
</head>
<body>
Content
</body>
</html>
]]>antlers allows for easy templating (themes) using the Handlebars templating engine and includes a few themes out of the box.
To get an idea of how your blog will look, take a look around! This blog is powered by antlers.
Using: npm
or
Manual: run npm install as an administrator (eg: sudo might be required), then node app.js
You can enter the admin panel of your newly created blog by visiting the following URL in your browser: http://localhost:3333/admin. The default user login is: test@test.com and password is: password1. After logging in, you can change the email (username) and password using the "Users" menu.
The easiest way to install Mediatomb is via Optware.
To install Optware you simply need to run:
# wget http://wolf-u.li/u/233-O/ffp/start/optware.sh
# chmod a+x /ffp/start/optware.sh
# /ffp/start/optware.sh start
You can then install Mediatomb by running:
# /opt/bin/ipkg install mediatomb
Then copy the Mediatomb startup script to "start":
# cp /opt/etc/init.d/mediatomb /ffp/start/mediatomb.sh
Then set the correct permissions on the "mediatomb.sh" file:
# chmod a+x /ffp/start/mediatomb.sh
You need to change one of the Mediatomb configs to allow autostart:
# vi /opt/etc/default/mediatomb
Ensure MT_ENABLE=true
You can now start Mediatomb by:
# sh /ffp/start/mediatomb.sh start
You can now browse Mediatomb via the Web Interface:
http://localhost:4915
]]>My stereo looks like this:
To adjust the time you need to hold the 'AM' button and at the same time press the number '1' button to adjust the hour or the number '2' button to adjust the minute.
I hope this helps someone.
]]>Download the module here.
Here are the steps to install the module:
Extract stgeorge-woocommerce-1.0.1.zip to: \wp-content\plugins\
Activate the St.George Bank WooCommerce Plugin via the Plugins menu.
Go to WooCommerce > Settings via the left menu.
Select the Checkout tab.
Click the St.George Bank link at the top of the page.
Check the Enable St.George Bank checkbox.
Copy the Response URL and paste this link on the St.George Merchant Administration Console.
Enter the Gateway URL which is obtained via the St.George Merchant Administration Console > Payment Page Options > URL.

It essentially dynamically sets up a WebBrowser control, loads a URL (waits for it to be completely loaded) and takes an image of the rendered HTML. The solution below creates an image which is the full size of the rendered HTML. You can add padding to the image by adding pixels to the wb.Width and wb.Height values.
It's quite simple really. Here is the function to render the HTML:
public Bitmap GenerateScreenshot(string url)
{
    // Load the webpage into a WebBrowser control
    WebBrowser wb = new WebBrowser();
    wb.ScrollBarsEnabled = false;
    wb.ScriptErrorsSuppressed = true;
    wb.Navigate(url);

    // Wait for the page to be completely loaded
    while (wb.ReadyState != WebBrowserReadyState.Complete) { Application.DoEvents(); }

    // Size the control to the web page's full width (add pixels here for padding)
    wb.Width = wb.Document.Body.ScrollRectangle.Width;
    // Size the control to the web page's full height (add pixels here for padding)
    wb.Height = wb.Document.Body.ScrollRectangle.Height;

    // Get a Bitmap representation of the webpage as it's rendered in the WebBrowser control
    Bitmap bitmap = new Bitmap(wb.Width, wb.Height);
    wb.DrawToBitmap(bitmap, new System.Drawing.Rectangle(0, 0, wb.Width, wb.Height));
    wb.Dispose();

    return bitmap;
}
You can call it by:
Bitmap thumbnail = GenerateScreenshot("www.google.com");
thumbnail.Save("C:\image file.bmp", ImageFormat.Bmp);
Notes: You can also use C:\test.html rather than www.google.com and you can change the output file by adjusting the ImageFormat value.
That's it.
]]>