NetBeans Xdebug Stuck at "Waiting For Connection"

If you are sure your NetBeans and Xdebug configuration is correct, but NetBeans still shows "Waiting For Connection", read on.

By default, NetBeans's PHP debugger port is 9000, and the default xdebug.remote_port is also 9000 — that pairing by itself is fine. The problem is that PHP-FPM's default listening port is also 9000. Since the port is already occupied by PHP-FPM, NetBeans obviously cannot listen on it for the Xdebug connection.

Solution: either change xdebug.remote_port together with the NetBeans debugger port, or change PHP-FPM's listening port (better yet, switch PHP-FPM to a Unix socket).
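For the first option, the fix is a one-line change on each side; the port below (9001) is an arbitrary free port:

```ini
; php.ini — move Xdebug off port 9000 so it no longer collides with PHP-FPM
xdebug.remote_port = 9001
```

Then set the same port in NetBeans under Tools > Options > PHP > Debugging and restart the debug session.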

Reference: http://www.cnblogs.com/azev/archive/2009/08/09/1542227.html

Enabling PATH_INFO Mode with nginx + php-fpm

Many frameworks route via PATH_INFO by default. For example, on Apache without rewrite rules, CodeIgniter is typically accessed as /index.php/controller/action. So how do we configure nginx and php-fpm to support PATH_INFO?

One php.ini setting related to PATH_INFO is cgi.fix_pathinfo, which defaults to 1; we set it to 0.

php.ini:

cgi.fix_pathinfo = 0

Next, the nginx configuration:

location ~ \.php($|/) {
    # The next line sets $fastcgi_script_name and $fastcgi_path_info; see the nginx docs for details
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    # Depending on your fpm setup, this may instead be: fastcgi_pass unix:/var/run/php-fpm.sock
    fastcgi_pass   127.0.0.1:9000;
    fastcgi_index  index.php;
    include        fastcgi_params;
    # This line is required: the default fastcgi_params file does not set SCRIPT_FILENAME
    fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;
    fastcgi_param  PATH_INFO          $fastcgi_path_info;
}
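With this in place, a request such as /index.php/controller/action should populate $_SERVER['PATH_INFO']. A tiny hypothetical sanity-check script (the filename check.php is made up for illustration):

```php
<?php
// Drop this in the web root and request /check.php/controller/action:
// PATH_INFO should contain the trailing "/controller/action" part.
if (isset($_SERVER['PATH_INFO'])) {
    echo 'PATH_INFO = ', $_SERVER['PATH_INFO'];
} else {
    echo 'PATH_INFO is not set - check the nginx/php.ini settings above';
}
```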

If you have any questions, please leave a comment below. I hope this helps.


[Repost] Putting Command-Line Output on the System Clipboard

Why do this?

  • It puts a command's output (from grep/awk/sed/find or your own program) straight onto the clipboard, so you can paste it into an IM window with Ctrl+V. This avoids a tedious detour: writing the output to a file, opening the file in an editor, selecting the text, and pressing Ctrl+C.
  • It copies a file's contents to the clipboard by command, avoiding copy mistakes and the jump to a text editor.

On Windows

Use the built-in clip command.
# located at C:\Windows\system32\clip.exe

Examples:

echo Hello | clip
# put the string Hello on the Windows clipboard

dir | clip
# put the output of dir (the current directory listing) on the Windows clipboard

clip < README.TXT
# put the contents of README.TXT on the Windows clipboard

echo | clip
# put an empty line on the Windows clipboard, i.e. clear it

 

On Linux

Use the xsel command.

Examples:

cat README.TXT | xsel
cat README.TXT | xsel -b # try the -b option if the plain form doesn't work
xsel < README.TXT
# put the contents of README.TXT on the clipboard

xsel -c
# clear the clipboard

 

On Mac

Use the pbcopy command. # there is a corresponding pbpaste command for pasting

Examples:

echo 'Hello World!' | pbcopy
# put the string Hello World! on the clipboard

 

Best Practices

Seeing the output while also copying it

If you pipe a command's output into one of the copy commands above (clip, xsel, pbcopy), you can no longer see the output. If you want to see it first, you can do the following:

$ echo 'Hello World!' | tee tmp.file.txt
Hello World!
$ xsel < tmp.file.txt
$ rm tmp.file.txt

 

That is, first use tee to send the output both to the terminal and to a file.

After the command finishes, put the file's contents on the clipboard.
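The two steps above can also be combined into one snippet with no temporary file (a sketch; it assumes xsel on Linux — substitute pbcopy on a Mac — and simply skips the copy when no clipboard tool is present):

```shell
# Show the output on the terminal AND copy it, with no temporary file.
out=$(echo 'Hello World!')
printf '%s\n' "$out"                      # still visible on the terminal
if command -v xsel >/dev/null 2>&1; then
    printf '%s\n' "$out" | xsel -b 2>/dev/null || true   # and on the clipboard
fi
```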

Copying Your SSH Public Key

Use the following command:

$ pbcopy < ~/.ssh/id_rsa.pub

Note: use the copy command appropriate to your system.

This avoids the tedious routine of opening the file in a text editor, selecting the text, and pressing Ctrl+C.

References

Source: http://oldratlee.com/post/2012-12-23/command-output-to-clip

The Difference Between print and echo in PHP

echo does not behave like a function, so it cannot always be used in the context of a function. Also, if you want to pass multiple arguments to echo, you must not use parentheses.

Many people say print is a function. Strictly speaking, it is not, even though it has a return value; the PHP manual itself says that print is not actually a function (it is a language construct), so you are not required to use parentheses around its argument.

The biggest difference between print and echo is that print has a return value while echo does not.

In practice, print behaves more like an operator.

Here are some code samples:

echo 1; // valid
print 1; // valid
echo (1); // valid
print (1); // valid
echo 1, 2, 3; // valid
print 1, 2, 3; // invalid !!!
echo (1, 2, 3); // invalid !!!
print (1, 2, 3); // invalid !!!
5 + echo 1; // invalid !!!
5 + print 1; // valid, because print returns a value

var_dump(function_exists('print')); // outputs bool(false), which also shows print is not a function

The conclusion: there is really no need to use print. echo can output multiple comma-separated values, which is convenient, and in PHP's compiled bytecode echo is also slightly more efficient than print (print has a return value, so it necessarily takes one extra step).

In PHP, When to Use stdClass and When to Use array

In PHP, a function that needs to return multiple values can return them either as a stdClass object or as an array. So when should we use stdClass and when should we use array — or should we just always use array?

One developer puts it this way:

  • Use an object when returning data with a fixed structure:
$person
    -> name = "John"
    -> surname = "Miller"
    -> address = "123 Fake St"
  • Use an array when returning a list:
"John Miller"
"Peter Miller"
"Josh Swanson"
"Harry Miller"
  • Use an array of objects when returning a set of records that share a fixed structure:
$person[0]
    -> name = "John"
    -> surname = "Miller"
    -> address = "123 Fake St"

$person[1]
    -> name = "Peter"
    -> surname = "Miller"
    -> address = "345 High St"

An object is not well suited to holding a list, because you always have to fetch values by property name; an array can hold both a list and fixed-structure data. Which one to use in a given case ultimately comes down to the developer's style and preference.

The developer offers a suggestion — a common practice — rather than a hard rule.
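One practical point worth adding: the two forms are easy to convert between with PHP's (object) and (array) casts, so the choice is rarely final. A small sketch:

```php
<?php
// The same record as an associative array and as a stdClass,
// converted back and forth with casts.
$row = array('name' => 'John', 'surname' => 'Miller');

$obj = (object) $row;          // stdClass: $obj->name, $obj->surname
echo $obj->name;               // John

$back = (array) $obj;          // back to an associative array
echo $back['surname'];         // Miller
```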

Also note that array is faster than stdClass. Consider the following code:

<?php

$t = microtime(true);
for ($i = 0; $i < 1000; $i++) {
	$z = array();
	for ($j = 0; $j < 10000; $j++) {
		$z['a'] = 'a';
		$z['b'] = 'b';
		$z['c'] = $z['a'] . $z['b'];
	}
}
echo microtime(true) - $t, PHP_EOL;

$t = microtime(true);
for ($i = 0; $i < 1000; $i++) {
	$z = new stdClass();
	for ($j = 0; $j < 10000; $j++) {
		$z->a = 'a';
		$z->b = 'b';
		$z->c = $z->a . $z->b;
	}
}
echo microtime(true) - $t, PHP_EOL;

The final output:

(screenshot of the two timing figures omitted)

 

As you can see, array really is a bit faster than stdClass — although, honestly, a difference this small can be ignored.

My conclusion? Use whichever you prefer, stdClass or array, but stay consistent within a project: don't have some functions return objects while others return arrays.

[Repost] Optimizing the Performance of a PHP Application

What I will say in this answer is not specific to Kohana, and can probably apply to lots of PHP projects.

Here are some points that come to my mind when talking about performance, scalability, PHP, …
I’ve used many of those ideas while working on several projects — and they helped; so they could probably help here too.
First of all, when it comes to performances, there are many aspects/questions that are to consider:

  • configuration of the server (both Apache, PHP, MySQL, other possible daemons, and system); you might get more help about that on ServerFault, I suppose,
  • PHP code,
  • Database queries,
  • Using or not your webserver?
  • Can you use any kind of caching mechanism? Or do you always need the most up-to-date data on the website?

 

Using a reverse proxy

The first thing that could be really useful is using a reverse proxy, like varnish, in front of your webserver: let it cache as many things as possible, so only requests that really need PHP/MySQL calculations (and, of course, some other requests, when they are not in the cache of the proxy) make it to Apache/PHP/MySQL.

  • First of all, your CSS/Javascript/Images — well, everything that is static — probably don’t need to be always served by Apache
    • So, you can have the reverse proxy cache all those.
    • Serving those static files is no big deal for Apache, but the less it has to work for those, the more it will be able to do with PHP.
  • Remember: Apache can only serve a finite, limited, number of requests at a time.
  • Then, have the reverse proxy serve as many PHP-pages as possible from cache: there are probably some pages that don’t change that often, and could be served from cache. Instead of using some PHP-based cache, why not let another, lighter, server serve those (and fetch them from the PHP server from time to time, so they are always almost up to date)?
    • For instance, if you have some RSS feeds (we generally tend to forget those when trying to optimize for performance) that are requested very often, having them in cache for a couple of minutes could save hundreds/thousands of requests to Apache+PHP+MySQL!
    • Same for the most visited pages of your site, if they don’t change for at least a couple of minutes (example: homepage?), then, no need to waste CPU re-generating them each time a user requests them.
  • Maybe there is a difference between pages served for anonymous users (the same page for all anonymous users) and pages served for identified users (“Hello Mr X, you have new messages”, for instance)?
    • If so, you can probably configure the reverse proxy to cache the page that is served for anonymous users (based on a cookie, like the session cookie, typically)
    • It’ll mean that Apache+PHP has less to deal with: only identified users — which might be only a small part of your users.
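The anonymous-vs-identified idea above can be sketched in Varnish's VCL (a hypothetical configuration; the cookie name and the VCL dialect depend on your application and Varnish version):

```vcl
# Serve anonymous visitors from cache; pass identified users to the backend.
sub vcl_recv {
    if (req.http.Cookie !~ "PHPSESSID") {
        # No session cookie: the page is the same for everyone, cache it.
        unset req.http.Cookie;
        return (lookup);
    }
    # Session cookie present: personalized page, skip the cache.
    return (pass);
}
```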

About using a reverse-proxy as cache for a PHP application, you can, for instance, take a look at Benchmark Results Show 400%-700% Increase In Server Capabilities with APC and Squid Cache.
(Yep, they are using Squid, and I was talking about varnish — that’s just another possibility ^^ Varnish is more recent, but more dedicated to caching.)

If you do that well enough, and manage to stop re-generating too many pages again and again, maybe you won’t even have to optimize any of your code 😉
At least, maybe not in any kind of rush… And it’s always better to perform optimizations when you are not under too much pressure…
As a sidenote: you are saying in the OP:

A site I built with Kohana was slammed with an enormous amount of traffic yesterday,

This is the kind of sudden situation where a reverse-proxy can literally save the day, if your website can deal with not being up to date by the second:

  • install it, configure it, let it always — every normal day — run:
    • Configure it to not keep PHP pages in cache; or only for a short duration; this way, you always have up to date data displayed
  • And, the day you take a slashdot or digg effect:
    • Configure the reverse proxy to keep PHP pages in cache; or for a longer period of time; maybe your pages will not be up to date by the second, but it will allow your website to survive the digg-effect!

About that, How can I detect and survive being “Slashdotted”? might be an interesting read.

 

On the PHP side of things:

First of all: are you using a recent version of PHP? There are regularly improvements in speed, with new versions 😉
For instance, take a look at Benchmark of PHP Branches 3.0 through 5.3-CVS.

Note that performance is quite a good reason to use PHP 5.3 (I’ve made some benchmarks (in French), and the results are great)
Another pretty good reason being, of course, that PHP 5.2 has reached its end of life, and is not maintained anymore!

Are you using any opcode cache?

  • I’m thinking about APC – Alternative PHP Cache, for instance (pecl, manual), which is the solution I’ve seen used the most — and that is used on all servers on which I’ve worked.
  • It can really lower the CPU-load of a server a lot, in some cases (I’ve seen the CPU-load on some servers go from 80% to 40%, just by installing APC and activating its opcode-cache functionality!)
  • Basically, execution of a PHP script goes in two steps:
    • Compilation of the PHP source-code to opcodes (kind of an equivalent of JAVA’s bytecode)
    • Execution of those opcodes
    • APC keeps those in memory, so there is less work to be done each time a PHP script/file is executed: only fetch the opcodes from RAM, and execute them.
  • You might need to take a look at APC’s configuration options, btw
    • there are quite a few of those, and some can have a great impact on both speed / CPU-load / ease of use for you
    • For instance, disabling apc.stat (http://php.net/manual/en/apc.configuration.php#ini.apc.stat) can be good for system-load; but it means modifications made to PHP files won’t be taken into account unless you flush the whole opcode-cache; about that, for more details, see for instance To stat() Or Not To stat()?
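A minimal php.ini sketch for APC (the directive values are illustrative, not tuned recommendations):

```ini
extension    = apc.so
apc.enabled  = 1
apc.shm_size = 64M   ; shared memory for the opcode and user caches
apc.stat     = 0     ; skip per-request stat() calls; remember to flush on deploy
```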

 

Using cache for data

As much as possible, it is better to avoid doing the same thing over and over again.

The main thing I’m thinking about is, of course, SQL queries: many of your pages probably run the same queries, and the results of some of those are probably almost always the same… which means lots of “useless” queries made to the database, which has to spend time serving the same data over and over again.
Of course, this is true for other stuff, like Web Services calls, fetching information from other websites, heavy calculations, …

It might be very interesting for you to identify:

  • Which queries are run lots of times, always returning the same data
  • Which other (heavy) calculations are done lots of times, always returning the same result

And store these data/results in some kind of cache, so they are easier to get — faster — and you don’t have to go to your SQL server for “nothing”.

Great caching mechanisms are, for instance:

  • APC: in addition to the opcode-cache I talked about earlier, it allows you to store data in memory,
  • And/or memcached (see also), which is very useful if you literally have lots of data and/or are using multiple servers, as it is distributed.
  • of course, you can think about files; and probably many other ideas.
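As a sketch of the query-result caching pattern (the key name, TTL and helper function below are made up for illustration; it assumes the pecl Memcached extension):

```php
<?php
// Look in the cache first; only hit the database on a miss.
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

$rows = $cache->get('top_articles');         // arbitrary cache key
if ($rows === false) {
    $rows = fetch_top_articles_from_db();    // hypothetical: the expensive SQL call
    $cache->set('top_articles', $rows, 300); // keep for 5 minutes
}
```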

I’m pretty sure your framework comes with some cache-related stuff; you probably already know that, as you said “I will be using the Cache-library more in time to come” in the OP 😉

 

Profiling

Now, a nice thing to do would be to use the Xdebug extension to profile your application: it often allows you to find a couple of weak spots quite easily — at least, if there is any function that takes lots of time.

Configured properly, it will generate profiling files that can be analysed with some graphic tools, such as:

  • KCachegrind: my favorite, but works only on Linux/KDE
  • Wincachegrind for Windows; it does a bit less than KCacheGrind, unfortunately — it doesn’t display callgraphs, typically.
  • Webgrind, which runs on a PHP webserver and so works anywhere — but probably has fewer features.

For instance, here are a couple screenshots of KCacheGrind:

(Screenshots: KCacheGrind’s main screen, and a callgraph exported as an image.)

(BTW, the callgraph presented on the second screenshot is typically something neither WinCacheGrind nor Webgrind can do, if I remember correctly ^^ )
(Thanks @Mikushi for the comment) Another possibility that I haven’t used much is the xhprof extension: it also helps with profiling and can generate callgraphs — but it is lighter than Xdebug, which means you should be able to install it on a production server.

You should be able to use it alongside XHGui, which will help with the visualisation of the data.
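Enabling the profiler is typically a two-line php.ini change (Xdebug 2 directive names; the paths and trigger are illustrative):

```ini
xdebug.profiler_enable_trigger = 1     ; profile only when explicitly requested
xdebug.profiler_output_dir     = /tmp  ; cachegrind.out.* files land here
```

A request with ?XDEBUG_PROFILE=1 appended then produces a cachegrind file you can open in one of the tools above.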

 

On the SQL side of things:

Now that we’ve spoken a bit about PHP, note that it is more than possible that your bottleneck isn’t the PHP-side of things, but the database one…

At least two or three things, here:

  • You should determine:
    • What are the most frequent queries your application is doing
    • Whether those are optimized (using the right indexes, mainly?), using the EXPLAIN instruction, if you are using MySQL
    • whether you could cache some of these queries (see what I said earlier)
  • Is your MySQL well configured? I don’t know much about that, but there are some configuration options that might have some impact.
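As an illustration of the EXPLAIN point above (the table and column names are hypothetical):

```sql
-- Does this frequent query use an index?
EXPLAIN SELECT * FROM articles WHERE author_id = 42;
-- If the "key" column of the EXPLAIN output is NULL, MySQL is doing a full
-- table scan; adding an index on the filtered column usually fixes that:
ALTER TABLE articles ADD INDEX idx_author (author_id);
```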

Still, the two most important things are:

  • Don’t go to the DB if you don’t need to: cache as much as you can!
  • When you have to go to the DB, use efficient queries: use indexes; and profile!

 

And what now?

If you are still reading, what else could be optimized?

Well, there is still room for improvements… A couple of architecture-oriented ideas might be:

  • Switch to an n-tier architecture:
    • Put MySQL on another server (2-tier: one for PHP; the other for MySQL)
    • Use several PHP servers (and load-balance the users between those)
    • Use other machines for static files, with a lighter webserver, like:
      • lighttpd
      • or nginx — this one is becoming more and more popular, btw.
    • Use several servers for MySQL, several servers for PHP, and several reverse-proxies in front of those
    • Of course: install memcached daemons on whatever server has any amount of free RAM, and use them to cache as much as you can / makes sense.
  • Use something “more efficient” than Apache?

Well, maybe some of those ideas are a bit overkill in your situation ^^
But, still… Why not study them a bit, just in case ? 😉

 

And what about Kohana?

Your initial question was about optimizing an application that uses Kohana… Well, I’ve posted some ideas that are true for any PHP application… which means they are true for Kohana too 😉
(Even if not specific to it ^^)

I said: use cache; Kohana seems to support some caching stuff (You talked about it yourself, so nothing new here…)
If there is anything that can be done quickly, try it 😉

I also said you shouldn’t do anything that’s not necessary; is there anything enabled by default in Kohana that you don’t need?
Browsing the net, it seems there is at least something about XSS filtering; do you need that?


 

Conclusion?

And, to conclude, a simple thought:

  • How much will it cost your company to pay you for 5 days? — considering that is a reasonable amount of time to do some great optimizations
  • How much will it cost your company to buy (pay for?) a second server, and its maintenance?
  • What if you have to scale larger?
    • How much will it cost to spend 10 days? more? optimizing every possible bit of your application?
    • And how much for a couple more servers?

I’m not saying you shouldn’t optimize: you definitely should!
But go for “quick” optimizations that will get you big rewards first: using some opcode cache might help you get between 10 and 50 percent off your server’s CPU-load… And it takes only a couple of minutes to set up 😉 On the other side, spending 3 days for 2 percent…

Oh, and, btw: before doing anything: put some monitoring stuff in place, so you know what improvements have been made, and how!
Without monitoring, you will have no idea of the effect of what you did… Not even if it’s a real optimization or not!

For instance, you could use something like RRDtool + cacti.
And showing your boss some nice graphics with a 40% CPU-load drop is always great 😉
Anyway, and to really conclude: have fun!
(Yes, optimizing is fun!)
(Ergh, I didn’t think I would write that much… Hope at least some parts of this are useful… And I should remember this answer: might be useful some other times…)

Original answer: http://stackoverflow.com/questions/1260134/optimizing-kohana-based-websites-for-speed-and-scalability

The Difference Between define() and const When Defining Constants in PHP

As of PHP 5.3.0, there are two ways to define a constant: the const keyword and the define() function:

const FOO = 'BAR';
define('FOO', 'BAR');

The most fundamental difference between the two is that const defines the constant at compile time, while define defines it at run time.

1. const cannot be used inside conditional statements; a const declaration must sit at the top level of its scope:

if (...) {
    const FOO = 'BAR';    // invalid
}
// whereas
if (...) {
    define('FOO', 'BAR'); // valid
}

2. The value given to const must be a fixed constant — not a variable, a class property, the result of an arithmetic operation, or a function call (see the PHP manual); define, by contrast, accepts the value of any expression:

const BIT_5 = 1 << 5;    // invalid
define('BIT_5', 1 << 5); // valid

3. The constant name given to const cannot be an expression, while define's can, so the following code is legal:

for ($i = 0; $i < 32; ++$i) {
    define('BIT_' . $i, 1 << $i);
}

4. Constants defined with const are case-sensitive, while define can take true as a third argument to define a case-insensitive constant:

define('FOO', 'BAR', true);
echo FOO; // BAR
echo foo; // BAR

A note: some say that in versions before PHP 5.3, const could only be used inside class definitions, not in the global scope. I haven't verified this — who is still on PHP 5.2 these days? Also note that the PHP manual documents const under Classes and Objects, which suggests const was originally designed for defining constants inside classes.
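For completeness, the class-constant form that const was originally designed for looks like this (a minimal illustration):

```php
<?php
// const inside a class definition — accessed via the scope operator.
class Config
{
    const VERSION = '1.0';
}

echo Config::VERSION; // 1.0
```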

This post is mainly a translated summary of the top-voted answer to this question: http://stackoverflow.com/questions/2447791/define-vs-const


Fixing NetBeans Not Showing the SVN Log in History

No extra plugin is needed: NetBeans supports svn log in its version-control History view out of the box, and it is quite convenient.

Yesterday, after setting up a new environment, I found the History view could not load the svn log, even though I had installed the latest TortoiseSVN.

Solution: add the bin folder under the TortoiseSVN installation directory to the PATH environment variable, then remove the project from NetBeans and create it again. After that it works normally.

Specifying a Width for MySQL Integer Types

MySQL lets you specify a display width for integer types. Take the following table:

CREATE TABLE `tmp01160800` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `is_enable` tinyint(1) unsigned NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  KEY `is_enable` (`is_enable`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

The is_enable column clearly only ever needs to show 0 or 1 — in application logic it might indicate whether a user is active — so we give it the type tinyint with a specified width of 1. At first I assumed tinyint(1) could only store values from 0 to 9, but it turns out that inserting 10, 99, or 125 works just fine. In fact, tinyint's storage range is -128 to 127, exactly the signed range of one 8-bit byte, and tinyint unsigned ranges from 0 to 255. So the specified integer width means little to the application: whatever width you specify, the storable range is the same.

The following passage is from the book High Performance MySQL:

MySQL lets you specify a width for integer types, e.g. INT(11). This is meaningless for most applications: it does not restrict the legal range of values, but only tells some of MySQL's interactive tools (such as the MySQL command-line client) how many characters to reserve for display. For storage and computational purposes, INT(1) and INT(20) are identical.
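To see this for yourself on the table above (the inserted values are chosen arbitrarily), note that tinyint(1) accepts anything in tinyint's range:

```sql
-- The display width (1) does not restrict the stored value.
INSERT INTO `tmp01160800` (`is_enable`) VALUES (0), (1), (99), (125);
SELECT `is_enable` FROM `tmp01160800`;
-- All four inserts succeed; in non-strict SQL mode an out-of-range value
-- would be clamped to the type's limit rather than rejected.
```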

Recent Recap

I haven't blogged in a long time. January 2014 is almost half over, and the day I go home for New Year is drawing closer. I figured that if I didn't write something, my post count for January might well end up at zero.

On October 23rd I bought an external monitor. I went cheap: I'd had a decent experience with an AOC in college, so I picked a cheap AOC on JD.com — 21.5 inches for only 699 yuan, great value on paper. From then on I had dual monitors, docs on the left and code on the right, which really is convenient. But you get what you pay for: the display quality is poor, frankly very poor — blurry, and worse, the upper part of the screen sometimes turns gray, fixable only by power-cycling the monitor. A classmate bought the same model and hit the same problem. Lesson learned: never just chase the lowest price.

Last week I bought a 13-inch MacBook Air, configured up to 8 GB of RAM. I ordered on Wednesday and it arrived on Thursday — and on Friday Apple's online store ran a promotion and cut the price by 700 yuan. Can you imagine how that felt? That said, I had bought through the education store, already 500 less than the regular price, so at most I lost about 200. Then on Friday, taking advantage of the price drop, I ordered an iPad Air, which is still in transit.

Some impressions of the MacBook. I unboxed it at the office the day it arrived: it really is thin and light, beautifully built — you want to touch it the moment you see it. But it wasn't quite as magical as I had imagined: at first I couldn't work the trackpad and simply could not produce a right-click, and web fonts didn't look as comfortable as on Windows 7. After getting home that evening I finally figured out the trackpad — it really is powerful, fantastic. The keyboard I'm still adapting to: it's small, the feedback is weak, and the arrow keys are tiny, all of which makes it awkward. It doesn't compare to the Cherry mechanical keyboard I used before. But with an external keyboard, reaching the built-in trackpad becomes inconvenient, since the keyboard sits in front of the laptop and blocks it. So I'll just have to get used to the built-in keyboard.

The first couple of days went to installing software. First came Homebrew, then brew to install nginx, mysql, gcc, gdb, python, and so on; PHP I downloaded and compiled by hand. The environment is set up, and so far everything is OK.

My feeling is that Mac OS is essentially an exceptionally polished, stable graphical interface built on a Unix foundation, with the software a user needs already set up out of the box. I downloaded a PDF and could open it without installing anything; office documents are no problem either — the system ships with its own office suite, though I don't know whether it supports Microsoft's formats (heh, I've never used them anyway); ordinary music files play directly in iTunes. The Mac truly works out of the box.

My favorite feature: tap a word with three fingers and its definition pops up. It works globally, in any application, with no extra setup, and it supports both Chinese and English. For someone like me who (showing off, admittedly) refuses to read a Chinese manual when an English original exists, it is convenient beyond words.

Last week I also bought a Thunderbolt-to-DVI adapter, so now I can use dual monitors: plug in and it just works, no settings or adjustments needed. Very convenient.

All in all, I'm very satisfied.

As for work: a new project at the company recently adopted the CodeIgniter framework. I know it fairly well — I've read its source and understand how it works internally — which is why I chose it. But problems are gradually surfacing. My colleagues and I keep debating which Model a piece of functionality belongs in, and whether a given processing step belongs in the Controller or the Model; I'm starting to feel muddled about it myself. CodeIgniter is also the only framework I've studied seriously, so I don't know how other frameworks handle this. When I have time I should study other popular frameworks. It seems frameworks nowadays are all built with Composer — it's very convenient: one autoload and everything is there.

For the past month or so I've been cramming algorithms with Introduction to Algorithms, but reading the book felt like a slog and got tedious, so I started doing problems on LeetCode instead — learning by doing, and searching for other people's solutions when I'm stuck. LeetCode only supports C++ and Java; after weighing the two I settled on C++, and I've learned a great deal about its vector and unordered_map. Honestly, back when I learned C++ in college, the only differences I could see between it and C were cout for output and the ability to define classes — and to this day I'm still not entirely clear on where the differences lie.

Next on the list is databases. Beyond writing basic CRUD SQL I know next to nothing: the differences between the storage engines, database design techniques, the steps a database goes through to execute a SQL statement — understanding these will surely help me do better architectural design. When I have time I also want to read the PHP source: how PHP's powerful arrays are implemented, whether the sort function is quicksort, why PHP function and class names are case-insensitive, how a PHP extension is loaded and run — I believe the answers are all in the source. And I should look at the newer PHP frameworks; CI is getting old, and knowing only CI is too limiting. How does Laravel run? Is Slim really just a router? How does CakePHP's design differ from CI's? Are these newer frameworks, with their heavy use of PHP 5.3+ object-oriented features, that much smarter? I believe I can find out.

Knowledge and technology change by the day; only with an attitude of lifelong learning can you keep up with these currents and, ultimately, put new technology to good use in your work.

Keep at it.